Setup Guide

This guide will help you set up your environment to use SAFE. Please follow the instructions in the section below that corresponds to how you will be using SAFE.

Server

The server is the heart of the SAFE framework. It contains the database, the Experiment Execution Manager (EEM), user compartments on disk, and (eventually) the termination detector. The following steps outline the setup of all of the necessary server components.

Repositories

First, it is necessary to download the most recent version of SAFE. To do so, clone our Mercurial repository:

hg clone http://code.nsnam.org/safe/safe

Virtualenv (optional)

It is recommended to create a virtualenv before installing the required dependencies. The advantages are that installing SAFE will not require root access, and that it avoids possible conflicts with other Python programs that may depend on different versions of the same libraries.

To setup virtualenv, inside the safe directory use:

virtualenv -p $(which python2.7) virtualenv

The -p flag ensures that virtualenv uses Python 2.7, since Python 3.x is not yet supported.

Then, activate it:

source virtualenv/bin/activate

The virtualenv must be activated in every session that will run the server.

Dependencies

The following libraries and applications are required on the server: Django, django-model-utils, Twisted, Sphinx, Fabric, South, Django REST framework, and Requests.

To install all requirements, use:

pip install django django-model-utils twisted sphinx fabric South djangorestframework requests
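After installation, you can sanity-check that the packages are importable. The sketch below is a convenience, not part of SAFE itself; note that some import names differ from the pip package names (e.g. djangorestframework imports as rest_framework, django-model-utils as model_utils), so verify the mapping against each package's documentation:

```python
def missing_modules(names):
    """Return the module names from `names` that cannot be imported."""
    missing = []
    for name in names:
        try:
            __import__(name)
        except ImportError:
            missing.append(name)
    return missing

# Import names corresponding to the packages installed above; some differ
# from the pip package name (assumed mappings, check each package's docs).
required = ["django", "model_utils", "twisted", "sphinx",
            "fabric", "south", "rest_framework", "requests"]
# An empty list from missing_modules(required) means everything is importable.
```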

Setting up the database

The database is a fundamental piece of this framework: it keeps track of the current state of all experiment executions and also stores their results. To simplify database management, all server programs use the same Django database layer.

This layer of abstraction provides object-relational mapping (ORM), which makes the database easier to manipulate. It also makes the framework independent of any specific database software.

To configure the database, edit Django's configuration file, located at server/web/source/settings.py, and set the DATABASES parameter.

Django has built-in support for SQLite. To use it, simply set the database engine to django.db.backends.sqlite3. However, in a production environment it is recommended to use a more robust engine such as MySQL or PostgreSQL. Using them may require installing the corresponding Django database adapters; please see the Django documentation for more information.
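For example, an SQLite configuration in settings.py could look like the snippet below (the NAME value is illustrative; point it at your own database file, and note that the MySQL and PostgreSQL backends take additional keys such as HOST, USER, and PASSWORD, as described in the Django documentation):

```python
# Excerpt from server/web/source/settings.py -- SQLite example.
# The 'NAME' path is illustrative; replace it with your database file.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'safe.db',
    }
}
```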

After the settings are properly configured, it is necessary to set up the database schema. This can be done easily by executing, in the server/web folder:

./manage.py syncdb --all

./manage.py migrate --fake

After executing the syncdb command for the first time, Django will ask you to set up an Auth model. Enter yes and provide the requested information. This creates a superuser, which will be used to perform any further configuration, as well as to run simulations and create new users.

Populating the database

The previous commands will not populate the database for you. You will need to use a database manager to configure the database properly before the first execution of SAFE; examples of such tools are phpMyAdmin (for MySQL) or the SQLite Manager Firefox plugin. We are working to allow this setup to be done through SAFE's normal user interface.

To populate the database, add rows with proper information to these tables:

  • safe_worker: This table holds information about all the worker machines. It is important to specify the path where the SAFE installation can be uploaded, and the host in the format user@machine_ip. Currently only public-key authentication is supported, meaning that the public key of the server must be in the ~/.ssh/authorized_keys file of the worker machine.
  • safe_installation: This table holds information about a single installation. The important fields are tarball_path, the complete path on the server to the tarball that will be distributed to the workers, and simulator_folder, the name of the folder inside the tarball that contains the ns-3 installation.
  • safe_expdescription: This table describes all experiments that one installation can have. It is important to fill the script field with the name of the ns-3 script to execute; this is the same name you would use when running the script directly in ns-3 with ./waf --run [SCRIPT]. Another important field is installation_id, which should hold the ID of the installation to which this experiment belongs.
  • safe_experiment: This table represents an experiment to be executed. It is important to specify the number of replications to be performed for each experiment design point, and the experiment description (exp_description) the experiment fits into.
  • safe_designpoint: This table holds a design point. One design point relates all the information necessary to fully determine the input required to run an experiment. It is important to fill in experiment_id, which indicates the experiment it belongs to.
  • safe_factor, safe_floatfactor, safe_integerfactor, safe_booleanfactor: These are the factors that will be used as inputs of an experiment. Factors are only the names of the input parameters, not the values themselves: for instance, an M/M/1 queue has the factors interarrival mean time and service mean time. Note that each factor requires rows in two tables: one in the generic safe_factor and another in the type-specific safe_TYPEfactor. The name has to match the name of the input variable that the ns-3 script expects, and exp_description_id has to be the ID of the experiment description created before.
  • safe_level, safe_floatlevel, safe_integerlevel, safe_booleanlevel: A level describes one input value of a factor. As with factors, it is necessary to create a generic row in safe_level and another in the type-specific safe_TYPElevel. It is important to specify the factor_id of the factor it describes, the experiment_id of the experiment it belongs to, and the design_point_id of the design point this level describes.

That is enough for a first simulation run. Other tables, such as Result and Metric, will be filled automatically by the server.
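Rows like those described above can also be inserted from a script. The sketch below uses Python's built-in sqlite3 module against an in-memory database; the real schema is created for you by ./manage.py syncdb, so the CREATE TABLE statements and column lists here are simplified stand-ins inferred from the field descriptions above, and all the inserted values (hosts, paths, script name) are invented examples:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use your real database file instead
cur = conn.cursor()

# Simplified stand-ins for the real Django-generated tables (assumed columns).
cur.execute("CREATE TABLE safe_worker (id INTEGER PRIMARY KEY, host TEXT, path TEXT)")
cur.execute("CREATE TABLE safe_installation (id INTEGER PRIMARY KEY,"
            " tarball_path TEXT, simulator_folder TEXT)")
cur.execute("CREATE TABLE safe_expdescription (id INTEGER PRIMARY KEY,"
            " script TEXT, installation_id INTEGER)")

# A worker reachable as user@machine_ip, with an upload path on that machine.
cur.execute("INSERT INTO safe_worker (host, path) VALUES (?, ?)",
            ("user@192.0.2.10", "/home/user/safe-upload"))
# An installation: where the tarball lives on the server, and the ns-3 folder inside it.
cur.execute("INSERT INTO safe_installation (tarball_path, simulator_folder) VALUES (?, ?)",
            ("/srv/safe/ns-3.tar.gz", "ns-3-dev"))
# An experiment description tied to installation 1, naming the ns-3 script to run.
cur.execute("INSERT INTO safe_expdescription (script, installation_id) VALUES (?, ?)",
            ("mm1-queue", 1))
conn.commit()
```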

Client

A client machine can be thought of as a worker machine. In an MRIP scenario there will be numerous clients, all carrying out simulations in parallel and marshalling simulation data back to the EEM over a network. Each machine can be set up as follows.
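SAFE's actual client-server communication is built on Twisted and its wire format is not described in this guide. Purely to illustrate the marshalling idea, the hypothetical sketch below sends one result record from a worker to a stub EEM as a JSON line over a TCP socket; the function names, message fields, and values are all invented for this example:

```python
import json
import socket
import threading

def eem_stub(server_sock, received):
    """Stub EEM: accept one connection and record the JSON result it carries."""
    conn, _ = server_sock.accept()
    f = conn.makefile()
    line = f.readline()          # one result per line
    f.close()
    conn.close()
    server_sock.close()
    received.append(json.loads(line))

def send_result(host, port, payload):
    """Worker side: ship one result dictionary to the EEM as a JSON line."""
    s = socket.create_connection((host, port))
    s.sendall((json.dumps(payload) + "\n").encode("utf-8"))
    s.close()

# Demonstration over the loopback interface.
server = socket.socket()
server.bind(("127.0.0.1", 0))    # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

received = []
t = threading.Thread(target=eem_stub, args=(server, received))
t.start()
send_result("127.0.0.1", port, {"design_point": 1, "metric": "delay", "value": 0.042})
t.join()
```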

Dependencies

The following libraries and applications are required on each client:

Notes About Clients and Dependencies

At this time SAFE assumes that, before experiment execution, clients will have the necessary dependencies to run both SAFE and your ns-3 script. This means that if you were to manually take the tar file packaged by the user and unpack it on a client, you would have no problem building and running that ns-3 script.