The VIRL November (virl.1.0.0) release is a major new release, moving from Openstack Icehouse to Openstack Kilo.
NOTE - performing an in-place upgrade from previous VIRL versions is NOT supported. A new installation image MUST be downloaded and installed.
Installation images are available for VMware Workstation, Workstation Player, Fusion Pro, ESXi and Bare-metal systems.
Please see the section below on Self-Service Downloads for instructions on how to obtain the appropriate installation image.
NOTE - SUPPORT FOR VIRL v0.9.293 WILL END ON 25th December. PLEASE UPGRADE AS SOON AS POSSIBLE!
Online training material is available and is designed to help you get started and become productive quickly - VIRL Learning Lab Tutorial. NOTE - this includes video walkthroughs; ensure that your browser supports H.264 video and that any required plugins are enabled.
Every registered VIRL user is now able to download the OVA and ISO images from https://virl.mediuscorp.com/my-account/. The new 'Download VIRL' link on this page will take you through to a self-service selection page where you are able to select the image you would like.
Please note that the downloads are large. The use of a download manager application is strongly recommended.
Existing VIRL users are able to download the new OVA or ISO images themselves NOW using the following commands:
sudo salt-call saltutil.sync_all
sudo salt-call -l debug --master us-2.virl.info state.sls virl.ova.esxi
sudo salt-call -l debug --master us-2.virl.info state.sls virl.ova.pc
sudo salt-call -l debug --master us-2.virl.info state.sls virl.iso
sudo salt-call saltutil.sync_all
sudo salt-call -l debug --master us-4.virl.info state.sls virl.ova.esxi
sudo salt-call -l debug --master us-4.virl.info state.sls virl.ova.pc
sudo salt-call -l debug --master us-4.virl.info state.sls virl.iso
sudo salt-call saltutil.sync_all
sudo salt-call -l debug --master eu-2.virl.info state.sls virl.ova.esxi
sudo salt-call -l debug --master eu-2.virl.info state.sls virl.ova.pc
sudo salt-call -l debug --master eu-2.virl.info state.sls virl.iso
sudo salt-call saltutil.sync_all
sudo salt-call -l debug --master eu-4.virl.info state.sls virl.ova.esxi
sudo salt-call -l debug --master eu-4.virl.info state.sls virl.ova.pc
sudo salt-call -l debug --master eu-4.virl.info state.sls virl.iso
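The guidance further below is to retry and switch between the available servers when a download attempt fails. A minimal sketch of that retry loop is shown here; `salt_call` is a hypothetical stand-in for the real `sudo salt-call -l debug --master <master> state.sls virl.ova.pc` invocation, not a VIRL-provided helper:

```shell
# Sketch only: cycle through the available salt-masters until one download
# succeeds. 'salt_call' is a hypothetical stand-in for the real command:
#   sudo salt-call -l debug --master "$m" state.sls virl.ova.pc
MASTERS="us-2.virl.info us-4.virl.info eu-2.virl.info eu-4.virl.info"

fetch_image() {
    for m in $MASTERS; do
        echo "trying $m"
        if salt_call "$m"; then
            echo "downloaded via $m"
            return 0
        fi
    done
    echo "all masters failed - please retry later" >&2
    return 1
}
```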
The command will pull down the VIRL image and place it in /home/virl, from where you can then sftp the image out. NOTE - this will download the full image that you've chosen (~4.7Gb) - this may well take a few hours to complete, so please be patient. You can check the progress of the download by periodically issuing the command 'sudo ls -lh /var/cache/salt/minion/files/base/images/[ova|iso]'.
When the command executes, you will see an output similar to the following:
Executing state file.managed for /home/virl/virl.1.0.0.pc.ova
SaltReqTimeoutError: after 60 seconds. (Try 1 of 3)
SaltReqTimeoutError: after 60 seconds. (Try 2 of 3)
SaltReqTimeoutError: after 60 seconds. (Try 1 of 3)
SaltReqTimeoutError: after 60 seconds. (Try 2 of 3)
Fetching file from saltenv 'base', ** attempting ** 'salt://images/ova/virl.1.0.0.pc.ova'
This is expected when the download is under way.
You may see download attempts fail, reporting:
ID: pc ova copy trial
Comment: Unable to manage file: Message timed out
Duration: 60132.839 ms
ID: delete post copy
Comment: One or more requisite failed: virl.ova.pc.pc ova copy trial
Summary for local
If this occurs, please retry. The servers will be busy and you may need to switch between the available servers.
Every registered VIRL user will be receiving an email with a link to the HTTP download server location. This link is valid for 3 days. If you miss the 3 day window, please mail firstname.lastname@example.org including your VIRL order number or CCO ID/email that was used to purchase the license and specify your image type. Once processed, you'll receive an email with a download link location. Please do be patient.
Please use the installation guides posted at http://virl-dev-innovate.cisco.com/ and select the instructions appropriate for your platform.
VIRL software component versions
The release contains the following versions:
- Openstack Kilo
- VM Maestro 1.2.4 Build Dev-363
- AutoNetkit 0.20.9/0.20.22
- Live Network Collection Engine 0.7.20
- VIRL_CORE 0.10.21.7
YOU MUST UPDATE YOUR VM MAESTRO CLIENT TO 1.2.4 Dev-363 OR LATER - USING OLDER RELEASES IS NOT SUPPORTED! Download the new VM Maestro client from "http://your VIRL server IP/download". Once installed, update the available node types as follows:
- Launch VM Maestro
- Select 'File / Preferences / Node Subtypes'
- Press 'Fetch From Server'
- Press 'Apply'
Platform reference model VMs
- IOSv - 15.5(3)M image
- IOSvL2 - 15.2.4055 DSGS image
- IOSXRv - 5.3.2 image
- CSR1000v - 3.16 XE-based image
- NX-OSv 7.2.0.D1.1(121)
- ASAv 9.5.1
- Ubuntu 14.04.2 Cloud-init
Linux Container images
- Ubuntu 14.04.2 LXC
- iPerf LXC
- Routem LXC
- Ostinato LXC
The images listed above are built into the VIRL installation image - no additional download is required.
Bare-Metal installation image (.ISO) - NOTE
The .ISO installer image will install the Ubuntu 14.04.3 operating system as well as all of the software stack for VIRL. Due to space constraints, the CSR1000v image is not included in the .ISO installation image. Once VIRL has been installed, your salt-key has been applied and communication established to the Cisco salt-masters, you will be able to install the CSR1000v 3.16 image from the VIRL Software panel in the User Workspace Management interface. Use a web-browser to log into the User Workspace Management interface and select the 'VIRL Software' tab from the panel on the left. Select the CSR1000v and then press 'Start Installation'.
SALT MASTER SETTINGS
Once you have installed VIRL, apply for a VIRL license key as per the installation instructions. Update your salt-master list as follows:
us-1.virl.info, us-2.virl.info, us-3.virl.info, us-4.virl.info
eu-1.virl.info, eu-2.virl.info, eu-3.virl.info, eu-4.virl.info
You should enter at least two hosts, picking a number between 1 and 4 for each. Do not enter the same number twice! You can list up to four salt-masters. There must be a ',' and a space between each salt-master.
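As a sketch of the rules above (at least two and at most four hosts, server numbers 1-4 used at most once, entries separated by a comma and a space), a small validation helper might look like this; the helper is illustrative only and is not part of VIRL:

```shell
# Sketch: validate a salt-master list such as "us-2.virl.info, eu-4.virl.info"
# against the rules above. Purely illustrative - VIRL does not ship this helper.
valid_master_list() {
    # entries must be separated by a comma and a space; two to four hosts
    count=$(printf '%s' "$1" | awk -F', ' '{print NF}')
    [ "$count" -ge 2 ] && [ "$count" -le 4 ] || return 1
    # extract the server number from each entry and reject duplicates
    nums=$(printf '%s\n' "$1" | tr ',' '\n' | sed 's/.*-\([1-4]\)\..*/\1/')
    [ "$(printf '%s\n' "$nums" | sort | uniq -d | wc -l)" -eq 0 ]
}
```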
IOSv 15.5(3)M - On boot-up the following message may be observed:
%SYS-3-CPUHOG: Task is running for (1997)msecs, more than (2000)msecs (0/0), process = TTY Background. -Traceback= 114ECF8z 130425z 15E20Ez 15DF30z 15DD3Dz 157D75z 158A2Bz 1589BFz 159B67z 153672z 3C9740Az 3C868CEz 3C89BEFz 5125F91z 491D86Cz 492E540z - Process "Crypto CA", CPU hog, PC 0x00157D2C
This is cosmetic and can be ignored.
IOSv 15.5(3)M / IOSvL2 15.2(4055) DSGS - CSCuv77089 - CVAC: day0 configuration only partially saved
When booting an IOSv or IOSvL2 instance within VIRL, the bootstrap configuration is inserted into the running-config and the following message is reported:
*Aug 10 15:06:08.555: %CVAC-4-CONFIG_DONE: Configuration generated from file flash3:/ios_config.txt was applied and saved to NVRAM. See 'show running-config' or 'show startup-config' for more details.
The running-config is fully applied. However, the startup configuration only contains partial content.
Workaround: issuing the command 'copy run start' after the device has fully booted will copy the running-configuration content to the startup-configuration as expected. Note: VIRL's configuration extraction function performs a 'copy run start' operation as part of its execution.
VIRLDEV-3140 - Live Visualization - ping with 50% packet loss - timeout reported
Configure a link with 50% packet loss and use the 'ping from' / 'ping to' function. The ping 'fails', reporting the following:
ping 192.168.0.6 source 192.168.0.5
This issue impacts the ping function within the Live Visualisation system but does not impact the regular operation of pings from the VMs themselves.
Workaround: reduce the packet loss on the selected link.
VIRLDEV-3119 - Rehost operation - changing the internalnet_port IP address from 172.16.10.250 results in broken system
Changing the internalnet_port IP address from the default (172.16.10.250) value and then performing the 'vinstall rehost' operation results in a VIRL system which is not operational.
Changing the internalnet_port IP address is NOT supported.
VM Maestro - terminal preference for detached internal terminals - this function has been deprecated in VM Maestro 1.2.4.
Workaround: you can manually 'tear' the terminal pane from the main VM Maestro window. Use this in conjunction with the VM Maestro preference (Cisco terminal) - "multiple tabs for one simulation".
Openstack Kilo - this version of VIRL is based on the Openstack Kilo release and contains many robustness improvements over the previous VIRL releases.
Virtual Machines and Container images
IOS XRv 5.3.2 - An updated IOS XRv virtual machine is now available and becomes the default IOS XRv instance.
ASAv 9.5.1 - A new ASAv Adaptive Security Appliance virtual machine is now available.
ASAv goes through a double-boot before becoming active. This is normal and expected.
NOTE - in order to run the ASAv VM, your server's CPU has to support the SSSE3 instruction set. Intel CPUs include this support. AMD CPUs need to be checked: SSSE3 support IS present in Bobcat, Bulldozer and Piledriver CPUs. To confirm the correct CPU extension support is present, enter the command 'cat /proc/cpuinfo | grep -e ssse3'.
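The flag check above can be wrapped in a tiny helper for scripting. This sketch takes the flags string as an argument so it can be tested anywhere; reading /proc/cpuinfo directly, as shown in the comment, is the real-world usage:

```shell
# Sketch: report whether a cpuinfo flags string advertises ssse3.
# Real-world usage would be: has_ssse3 "$(grep -m1 '^flags' /proc/cpuinfo)"
has_ssse3() {
    # split the flags on spaces and look for an exact 'ssse3' entry,
    # so that plain 'sse3' does not produce a false positive
    printf '%s\n' "$1" | tr ' ' '\n' | grep -q -x 'ssse3'
}
```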
LXC Routem - A new LXC image is now available containing the Cisco Routem application. This application provides control-plane session simulation and the ability to inject prefixes based on a text-based configuration.
When deployed, the LXC-routem node can be accessed using the 'ssh' connection method. Telnet will NOT work.
Detailed example of the Routem configuration options can be found at http://your_virl_server_IP_address:19400/docs/routem/. Note that any changes you make to the routem configuration will NOT be saved if you perform a configuration extraction operation. A detailed video on the Routem application is available on the VIRL channel on Youtube - #VIRL Introducing Routem - Control-plane traffic generator - YouTube
LXC Ostinato - A new LXC image is now available containing the Ostinato packet traffic generator application. This application provides data-plane traffic generation capabilities. The Ostinato 'drone' (generator) is used in combination with the Ostinato GUI. The GUI can be obtained from Downloads – Ostinato.
When deployed, the LXC-ostinato node can be accessed using the 'ssh' connection method. Telnet will NOT work.
The Ostinato 'drone' application will execute automatically when the LXC becomes active. A detailed video on using the Ostinato application is available on the VIRL channel on Youtube - #VIRL Introducing Ostinato in VIRL - data-plane traffic generator - YouTube
Feature suggestion and feedback - Log in to UWM and on the top right-hand side of the page, you'll find a new 'Feedback' button:
Click this button and you're able to post your suggestions for future product improvements as well as comments on the UWM interface itself. Feedback will be sent to the VIRL development team.
Let your voice be heard!
OpenVPN support - The new OpenVPN feature provides a VPN access mechanism, enabling users to connect from their host/laptop to their VIRL server, creating a direct connection to the Flat network inside of the VIRL server (172.16.1.x typically). This is especially valuable in cases where the Flat network is not directly reachable. The user will then have the ability to communicate directly with any device connected to the Flat network. This provides the ability to run applications such as 'snmpwalk' or the 'Ostinato GUI' on your laptop with the devices running on the VIRL host.
A detailed video on the OpenVPN function is available on the VIRL channel on YouTube - #VIRL OpenVPN in VIRL - vpn access solution - YouTube
Link latency, jitter and packet-loss controls - When a simulation is running, users are able to select links between nodes in a simulation and set latency, jitter and packet-loss values on that link. This enables users to create links that have properties seen in the physical world, such as trans-atlantic or transcontinental latencies or packet-loss. The link parameters can be applied on any link except for those connected to a FLAT or SNAT external connector. The values set by the user are applied bi-directionally, meaning that setting a latency value of 100msec will result in 100msec from node A to node B and 100msec from node B to node A for the return path (200msec total). The same is true for packet-loss. Ten packets sent from node A on a link with 10% packet-loss will, on average, result in 9 packets being received on node B. The packet loss will also be applied on the return path, meaning that another packet may be lost between node B and node A.
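The bi-directional behaviour described above reduces to simple arithmetic. This sketch is for illustration only and is not a VIRL utility:

```shell
# Sketch: configured link values apply in each direction of the link.
round_trip_ms() {
    # $1 = configured one-way latency in msec; a ping traverses it twice
    echo $(( $1 * 2 ))
}
round_trip_delivery_pct() {
    # $1 = configured loss percentage; a ping packet must survive both
    # directions, so 10% loss leaves roughly an 81% round-trip success rate
    awk -v loss="$1" 'BEGIN { p = (100 - loss) / 100; printf "%d\n", p * p * 100 }'
}
```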
A detailed video on the Link latency, jitter and packet-loss feature is available on the VIRL channel on YouTube - #VIRL Latency, Jitter & Packet loss feature introduction - YouTube
Static TCP port allocation controls - Users are able to specify the TCP port number that they wish to use when connecting to the console, auxiliary or monitor ports of a particular node running in their simulation. The port numbers are optional and can be set via the VM Maestro editor. The port number allocation is retained in the VIRL file and will be applied each time the simulation is started. Functionality is provided to be able to easily adjust the TCP port numbers in use.
A detailed video on the Static TCP port allocation feature is available on the VIRL channel on YouTube - #VIRL static TCP port assignment - YouTube
Web Editor - This version of VIRL includes an ALPHA release of a topology design tool that runs within a web-browser. The editor is available from within the User Workspace Manager interface when logged in as a user such as 'guest' (NOT as uwmadmin). From the UWM main page, select 'My simulations', press the 'Launch new simulation' button and then press the 'Editor' button to open up the editor. Press 'Add & Connect Nodes' to place nodes into the topology, then press 'finish and return' to return to the main menu, from where you can then set node-level and topology-level properties. 'Run ANK' will generate the per-node configurations, which can be reviewed and modified prior to 'Sync'ing. Syncing saves the content so that it can be launched as a simulation via the UWM interface. Once the sync is complete, close the editor tab in your browser, enter a filename for your topology and press 'launch'.
A detailed video showing the workflow used with the Web Editor is available on the VIRL channel on YouTube - #VIRL Web-editor alpha detailed walkthrough - YouTube
VM Maestro - Java Runtime Environment bundled - VM Maestro no longer requires the user to have installed a Java
Runtime Environment. This is now included within the VM Maestro client binary.
VM Maestro active canvas - VM Maestro now provides an 'active canvas'. When a simulation is started and the user switches to the 'simulation perspective', a new window will be displayed showing the network diagram. As the virtual machines and LXCs boot, the diagram is updated to reflect the state of the simulation. Nodes will change colour reflecting their operational state.
Nodes shown in green are in the 'active' state, while a blue node is in the 'building' state. A grey node is one that is yet to be started or has been stopped. Once in the 'active' state, users can right-click on a node to perform operations such as opening an SSH or Telnet connection, extracting the configuration of the specific node and stopping/starting the node. Right-clicking on the background, without any node being selected, enables the user to perform simulation-wide operations such as configuration extraction, launching the live visualisation view and stopping the simulation, as well as resetting all link latency, jitter and packet-loss parameters that may be in operation. If the simulation view is closed, it can be re-established by selecting the simulation from the simulations panel, right-clicking and selecting the 'View simulation' option.
Link latency, jitter and packet-loss parameters can be set by selecting a link, right-clicking and using the 'modify link parameters' option.
Please see the video on the VIRL channel on YouTube - #VIRL Latency, Jitter & Packet loss feature introduction - YouTube
Packet capture operations can be performed by selecting a link, selecting the interface (at one end of the link) and right-clicking to reveal the packet-capture control menu:
Once a packet capture has been configured, an icon will indicate that a packet-capture is present on the interface, and the Packet capture view will list the .pcap file that is available for analysis.
Additional diagram labels are now available, including interface name, serial port number assignment etc. These can be accessed from the 'show topology labels' button on the VM Maestro toolbar:
VM Maestro - configuration export/import to directory - A new import/export function is available from the 'File'/'import' or 'File'/'export' menu. This function enables you to take the per-node configurations from within your .VIRL file and export them out to a directory location of your choice as individual text files (.cfg suffix). With the configuration files in the directory, you can then make changes as you wish and then, using the 'import' function, bring
the content of the individual configuration files back into the .VIRL file.
The filename of each of the configuration files matches the node name in your simulation. If you have altered the filename, the import system will highlight that the expected file is missing. Similarly, if there are files that do not have an equivalent node present in the topology, the system will flag that.
Note - when performing an 'import' operation, the file to which you wish to import the configuration MUST be open in VM Maestro.
Note - if you wish to overwrite the node configuration, simply remove the content of the file but do not delete the file.
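The missing/extra file checks described above can be mimicked with a short script. The function name and output format here are illustrative only; the actual check happens inside VM Maestro:

```shell
# Sketch: compare a directory of exported .cfg files against the node names
# in a simulation, flagging missing and extra files much as VM Maestro does.
check_configs() {
    nodes="$1"; dir="$2"
    # every node should have a matching <node>.cfg file
    for n in $nodes; do
        [ -f "$dir/$n.cfg" ] || echo "missing: $n.cfg"
    done
    # every .cfg file should correspond to a node in the topology
    for f in "$dir"/*.cfg; do
        [ -f "$f" ] || continue            # no .cfg files at all
        base=$(basename "$f" .cfg)
        printf '%s\n' $nodes | grep -q -x "$base" || echo "extra: $base.cfg"
    done
}
```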
Please see the video on the VIRL channel on YouTube.
Live Visualisation - new capabilities - The Live Visualisation function in VIRL has had a series of new capabilities added in this release.
Route table collection - collect the route table from every node in the simulation. The results will be reported in the Log View
Configuration extraction - initiate the collection of each nodes configuration and save to .VIRL file
Simulation shutdown - terminate your simulation from within the Live Vis view
Ping from - select a node, select ping from, select another node, select ping to and a 5-packet ping will be triggered from source to destination. The results will be reported in the Log view.
Console port access - open a console port to serial or aux ports from within the web-browser
Plot Routes to prefix (alpha) - select a node and the system will show the next_hops taken by traffic to this node's loopback address. This will work for nodes that are IOSv instances.
Traceroute - when a traceroute is executed, floating the mouse pointer over the path will show information about the path taken
Infrastructure-only configuration generation- Autonetkit now offers the ability to generate a stripped-back
configuration that provides the basic infrastructure configuration required to support configuration extraction and Live Visualization.
With this function enabled, no IP addressing or routing protocol configuration will be created, leaving the node in a state where it is ready for manual configuration. This is ideal when using a simulation for study practice or when wanting to go through the process of building up an environment by hand.
The feature is enabled by selecting the 'infrastructure only' option at the topology level under the 'autonetkit' tab in VM Maestro:
This capability is also available in the Web-Editor, again at the topology level:
NX-OSv Mac address injection - Autonetkit will now insert a mac-address line into the node configuration for NX-OSv instances. Simulation-wide unique mac-addresses are generated, which can subsequently be edited prior to simulation launch.
Simulation 'name' support - When a simulation is started in VIRL, a unique simulation ID is created. If using VIRL simulation in conjunction with CLI or REST API calls, it is helpful to have a predictable name or 'label' that you can define and assign to the simulation. This means that the simulation name can remain the same, making scripting and programming easier.
The simulation name can be set via VM Maestro using the 'Simulation launch with options' function:
It can also be set via UWM when starting a simulation. From the UWM main page, select 'my simulations' and press the button to 'Launch new simulation' and setting the 'simulation name' field:
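For scripting, the simulation name can also be supplied when launching via the STD REST API. The endpoint sketched below (port 19399 and a 'session' query parameter) is an assumption for illustration; verify the exact call against the API documentation on your own server before relying on it:

```shell
# Hedged sketch: build the (assumed) STD REST launch URL carrying a
# user-chosen simulation name. Server address and credentials are examples.
launch_url() {
    server="$1"; sim_name="$2"
    echo "http://$server:19399/simengine/rest/launch?session=$sim_name"
}
# Hypothetical invocation:
# curl -X POST -H 'Content-Type: text/xml' -u guest:guest \
#      --data @topology.virl "$(launch_url 172.16.1.254 my-sim)"
```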
Python Client Libraries - Python client libraries for the STD and UWM services are now available. These are provided as .whl files that can be downloaded from http://your_virl_server_IP_address/download/ with installation information at http://your_virl_server_IP_address/doc/clients.html. The package can be installed using the command 'sudo pip install <filename>.whl'. This provides the VIRL STD, UWM and Openstack CLIs and client modules, which can be 'imported' into your Python projects.
The 'pydoc' function is available for these modules and can be accessed by issuing the commands 'pydoc virl_client.std', 'pydoc virl_client.uwm' and 'pydoc virl_client.openstack'.
The .whl file does not need to be installed on the VIRL server - it is already installed. The modules should be installed on your local Linux workstation. This .whl is specifically for Linux.
STD and UWM API documentation - The STD and UWM API documentation is now integrated in a new 'API
explorer' interface, enabling users to browse the available REST calls and see examples of the call structures. The documentation can be found in UWM, under the 'documentation' link.
User Workspace Management interface revamp - the UWM interface has undergone an overhaul. All of the previous
functionality has been retained with some reordering of the menus to provide a better, more logical structure.
Usage information reporting control - Within UWM, under the 'VIRL Server', 'System configuration' tab, you'll find a new option tab entitled 'Usage reporting'. With the field enabled, the server will periodically send anonymous usage details back to Cisco for the purposes of product improvement.
No information is transmitted relating to the configurations that you have loaded in your devices. Information is gathered on the type of virtual machine images that are in use, the use of features and capabilities within VIRL etc. The information is transferred in clear-text.
If you would prefer not to send such information, please clear the check-box and press the 'Apply Changes' button.
System Resource RAM and CPU overcommit controls - Within UWM, under the 'VIRL Server', 'System configuration' tab, you'll find a new option tab entitled 'Resources'. The Resource controls provided here enable you to tune the RAM and CPU overloading that your system will provide.
If your system has 4Gb of free memory and your simulation requires 5Gb, the simulation will fail to start. By increasing the RAM overcommit value, the system can support more Virtual Machines at the expense of performance. This will enable you to run larger simulations but care must be taken not to completely overwhelm your system. If you were to try to run 20 IOSv instances in 4Gb of memory with a high overcommit value, the VMs may boot but may never stabilise to a point where they operate in a satisfactory manner.
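As a back-of-the-envelope sketch of the example above (the function is illustrative, not a VIRL utility):

```shell
# Sketch: with overcommit, usable RAM is roughly free RAM times the
# overcommit factor - so 4Gb free cannot host a 5Gb simulation at factor 1,
# but can (with a performance penalty) at factor 2.
simulation_fits() {
    free_gb="$1"; required_gb="$2"; overcommit="$3"
    [ $(( free_gb * overcommit )) -ge "$required_gb" ]
}
```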
The overcommit value can be reset via this menu too.
VIRL Service TCP ports - reconfiguration controls - additional options are now presented to the user in order to be able to reconfigure the TCP ports on which key services are running. Within UWM, under the 'VIRL Server', 'System configuration' tab, you'll find the option tab entitled 'VIRL Services' where the services and their default ports are listed. Modify the port values to suit your needs and press the 'Apply changes' button to start the system reconfiguration.
Customer defects resolved
The following customer-found defects are resolved in this version of VIRL:
VIRLDEV-2841 - AutoNetkit custom IPv4 and IPv6 loopback fails if non-router present
Custom IPv6 loopback address block is not being used when set.
This issue is now resolved
VIRLDEV-3016 - UWM - System configuration - VNC server is not enabled when feature is turned on
When the 'enable VNC' option is set in the UWM system configuration controls, the VNC server is not started.
This issue is now resolved