Oct 11 2016

Duplicate randomly generated data in a Linux-based VM

A few weeks back I wrote a blog post about how to create random data in a Linux (CentOS) based VM, which you can read here. That post received a few comments, and in this post I'll address one in particular: the time it takes to generate the data.

Generating everything directly from /dev/urandom does take quite some time. If you are happy to generate one base file of random data and copy that file multiple times instead, you can use the script provided in this blog post.

You need to specify six things in the script:

  • logfile – full path to the log file destination, including the log file name
  • basefile – the file the script fills with random data from /dev/urandom
  • file copies – naming convention for the files copied from the base file
  • numfiles – number of copies to be created
  • filesize – the size of the base file
  • /datatest – the directory the files are written to

The script will create a base file of random data and then make as many copies of that file as you specify.

Below is the script I used for some testing; it creates ten 1 GB files.
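The original script did not survive in this copy of the post, so here is a minimal sketch along the lines described above. The variable names follow the bullet list; the post writes under /datatest, but the sketch uses /tmp/datatest so it runs without root, and filesize is kept to 1 MB for illustration (set filesize=1024 and numfiles=10 to reproduce the ten 1 GB files).

```shell
#!/bin/sh
# Hypothetical reconstruction of the script described in the post.
datadir=/tmp/datatest                 # directory specification (post uses /datatest)
logfile=$datadir/datagen.log          # full path to the log file, including its name
basefile=$datadir/basefile.dat        # base file filled from /dev/urandom
filecopy=$datadir/testfile            # naming convention for the copies
numfiles=4                            # number of copies to be created
filesize=1                            # size of the base file in MB

mkdir -p "$datadir"

# One pass over /dev/urandom builds the base file; every later file
# is a plain cp of it, which is far faster than reading urandom again.
echo "$(date) - creating base file $basefile" >> "$logfile"
dd if=/dev/urandom of="$basefile" bs=1M count="$filesize" 2>> "$logfile"

i=1
while [ "$i" -le "$numfiles" ]; do
    echo "$(date) - copying to $filecopy$i.dat" >> "$logfile"
    cp "$basefile" "$filecopy$i.dat"
    i=$((i + 1))
done
echo "$(date) - finished $numfiles copies" >> "$logfile"
```

Each step is appended to the log file with a timestamp, so you can check afterwards how long the urandom pass took compared with the copies.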

This is the SSH session output from when I ran the script in my test environment today, creating only 4 files.

