

What it is? A simple Ruby script to roughly simulate multiple users hitting a web site or web application. You pass in the number of threads (simultaneous users) and the name of a text file containing 1 URL per line.

What it is NOT? Robust or "accurate". It's a simple hack -- it starts up the number of threads you tell it to, but they are Ruby threads, not "real" OS-level threads. Each thread loops through the array of URLs in the text file and retrieves them. It is simple HTML retrieval though, not a true simulation of a real browser. It pulls the HTML, follows redirects, etc., but then chucks the response and moves on. A web browser would retrieve the HTML, parse it looking for external resources (CSS, images, JS files, etc.), retrieve those as well, then render it all and run any embedded client-side scripts. This does none of that. (unless those URLs are in your text file, I guess)
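For contrast, here's a rough sketch of that extra step a browser does -- scanning the HTML for linked resources. The regex is a naive stand-in for a real HTML parser, and the asset names are made up:

```ruby
# Naive scan for src/href attributes pointing at common static assets.
# A real browser parses the full DOM; this regex is only an illustration.
def linked_resources(html)
  html.scan(/(?:src|href)=["']([^"']+\.(?:css|js|png|jpe?g|gif))["']/i).flatten
end

html = '<html><head><link rel="stylesheet" href="main.css">' \
       '<script src="app.js"></script></head>' \
       '<body><img src="logo.png"></body></html>'
puts linked_resources(html).inspect   # ["main.css", "app.js", "logo.png"]
```

A real tool would then fetch each of those URLs too, which is exactly the work this script skips.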

I find it useful for simple tests and simulating light loads, but don't use it to verify it pulls "good" info, and don't use it to test the responsiveness of the webserver. Mostly I just use it to monitor open sessions, database connections, memory usage, etc. on the server as the volume of requests ramps up.

The source code is at http_load_tester.rb

#!/usr/bin/env ruby
require 'net/http'
require 'uri'

## very basic single-page fetcher, will follow redirects but only fetches the initial response
## does not go back and fetch linked resources like CSS, images, JS files, etc.
class HttpFetcher
  def fetch2(uri_str, limit = 10)
    raise ArgumentError, 'HTTP redirect too deep' if limit == 0
    response = Net::HTTP.get_response(URI.parse(uri_str))
    case response
    when Net::HTTPSuccess     then
      puts "success"
    when Net::HTTPRedirection then
      print "redirect "
      fetch2(response['location'], limit - 1)
    when Net::HTTPClientError then
      puts "client error"
    when Net::HTTPServerError then
      puts "server error"
    end
    #puts "** NEW REQUEST\n#{response.body}"
  end
end

## this is what ramps up the requests to multiple threads,
## each thread looping through and fetching each URL in the array once
## future enhancement might be to pass in a param for the number of times each thread runs through the array of URLs
class HttpLoadRunner
  def fetch_in_threads(uri_strs, num_threads)
    cnt = 0
    tt = []
    num_threads.times do
      cnt += 1
      thread_num = cnt   # per-iteration copy so each thread reports its own number
      thread = Thread.new do
        begin
          x = HttpFetcher.new
          uri_strs.each do |uri_str|
            puts "Thread #{thread_num} ** #{uri_str} ** starting"
            x.fetch2(uri_str, 2)
            puts "Thread #{thread_num} ** #{uri_str} ** done"
          end
        rescue => e
          puts "Exception: #{e.message}"
        end
      end
      tt << thread
    end
    tt.each { |thr| thr.join }
  end
end

## next line checks to see if this script is being run itself or required as a library
## only runs this stuff if standalone
if __FILE__ == $0
  if ARGV.length != 2 then
    puts "Usage: #{$0} num-threads url-file"
    puts "\twhere num-threads is an integer indicating number of simulated users (threads)"
    puts "\tand url-file is path/filename to a text file containing 1 URL per line"
    exit 1
  end
  numthreads = ARGV.first.nil? ? 1 : ARGV.first.to_i
  puts "Number of threads to run: #{numthreads}"
  filename = ARGV[1]
  puts "URL filename: #{filename}"
  test_urls = []
  File.new(filename, "r").each { |line| test_urls << line.chomp }
  puts "URLS:\n -> #{test_urls.join("\n -> ")}"
  lr = HttpLoadRunner.new
  lr.fetch_in_threads(test_urls, numthreads)
end
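If you want to see the thread fan-out pattern from fetch_in_threads in isolation, here's a self-contained, network-free sketch -- the work items are placeholders and the "fetch" is stubbed out with a queue push:

```ruby
# Same shape as fetch_in_threads: N threads each walk the full list of
# items, then the main thread joins them all. Queue is thread-safe, so
# the threads can all push results without extra locking.
items = ["url-a", "url-b", "url-c"]
results = Queue.new

threads = 3.times.map do |n|
  Thread.new do
    items.each { |item| results << "thread #{n} fetched #{item}" }
  end
end
threads.each(&:join)

puts results.size   # 3 threads x 3 items = 9
```

The join at the end is what keeps the script alive until every simulated user finishes its pass through the URL list.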