I've run tests like this that produced JTL files greater than 25GB. I used
the method mentioned by Nermin, running over 20 servers, each executing a
test configured to generate only 5% of my target load. Works absolutely
fine. (Note: Amazon is your friend for such shenanigans.)
Obviously: use CSV mode, don't log responses, run on the command line.
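For reference, the relevant settings would look something like this (just a
minimal sketch - the plan and file names are placeholders, and the
save-service properties can equally go in user.properties):

    # jmeter.properties / user.properties
    jmeter.save.saveservice.output_format=csv
    jmeter.save.saveservice.response_data=false
    jmeter.save.saveservice.samplerData=false

    # non-GUI run from the command line
    jmeter -n -t plan.jmx -l results.jtl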
If you end up with data this big then you really need to aggregate the
files in situ before trying to download them. I didn't bother trying to
view them locally in the JMeter GUI either; instead I loaded the data into
MySQL and played with it from there. There are some pretty cool open source
ETL tools to help with this.
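As a rough idea of the MySQL route, something like the following would do it
(a sketch only, assuming the default CSV JTL columns; the host, credentials,
table layout and file naming are all made up for illustration):

    import csv
    import glob
    import mysql.connector  # pip install mysql-connector-python

    conn = mysql.connector.connect(host="localhost", user="jmeter",
                                   password="secret", database="loadtest")
    cur = conn.cursor()

    # Table mirroring the default CSV JTL columns we care about
    cur.execute("""
        CREATE TABLE IF NOT EXISTS samples (
            ts      BIGINT,        -- timeStamp (epoch millis)
            elapsed INT,           -- response time in ms
            label   VARCHAR(255),  -- sampler name
            code    VARCHAR(8),    -- responseCode
            success BOOL,
            bytes   BIGINT
        )""")

    sql = ("INSERT INTO samples (ts, elapsed, label, code, success, bytes) "
           "VALUES (%s, %s, %s, %s, %s, %s)")

    for path in glob.glob("results-*.jtl"):  # one JTL per load generator
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                cur.execute(sql, (int(row["timeStamp"]), int(row["elapsed"]),
                                  row["label"], row["responseCode"],
                                  row["success"] == "true",
                                  int(row["bytes"])))
    conn.commit()

For files this size you'd want LOAD DATA INFILE, or at least batched
executemany() calls, rather than row-by-row inserts - but the idea is the
same.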
Also, we've recently started playing with Mongo for super big datasets -
man, it is fast.
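If you want to try the Mongo route, a quick-and-dirty loader looks
something like this (again just a sketch; the connection string, database
and collection names are assumptions):

    import csv
    from pymongo import MongoClient  # pip install pymongo

    client = MongoClient("mongodb://localhost:27017")
    samples = client["loadtest"]["samples"]

    with open("results.jtl", newline="") as f:
        batch = []
        for row in csv.DictReader(f):
            # store numeric fields as numbers so aggregations work
            row["timeStamp"] = int(row["timeStamp"])
            row["elapsed"] = int(row["elapsed"])
            batch.append(row)
            if len(batch) == 10000:  # insert in chunks, keeps memory flat
                samples.insert_many(batch)
                batch = []
        if batch:
            samples.insert_many(batch)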
-----
http://www.http503.com/