Linux containers allow for easy isolation of developer environments. If you're often working with a bunch of different ROS versions, it's a lot easier to do your work entirely in containers.
You'll first need to install LXD using snap.
ubuntu@lxhost:~$ sudo snap install lxd
lxd 3.21 from Canonical✓ installed
Throughout this guide I will be using the hostname to distinguish which machine I'm running commands on.
- lxhost represents the bare metal machine you'll be creating containers in.
- ros1-live is the container we'll be creating later.
- remote-pc is a different machine on the same LAN as lxhost.
Pay attention to the value after the @ in the shell prompts to make sure you run commands on the right machine.
If you haven't added /snap/bin to your PATH yet, you'll want to do so in order to call the programs snap installs:
ubuntu@lxhost:~$ export PATH=/snap/bin:$PATH
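To make this persist across sessions, you could append the same line to your shell's startup file (a minimal sketch, assuming your login shell reads ~/.profile):

ubuntu@lxhost:~$ echo 'export PATH=/snap/bin:$PATH' >> ~/.profile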
You'll need to initialize LXD before your first run. I'll be using default settings for this example, but you should look into the settings so you know what you're getting into. The exact printout may vary depending on your system, as well as what version of LXD you're using.
ubuntu@lxhost:~$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
ubuntu@lxhost:~$
This handled setting a number of configuration options for us, as well as creating a storage pool for any containers we make and a network bridge (named lxdbr0 above) that will connect our containers to the network.
LXD runs as a daemon on your system that is accessible to members of the lxd group. Users with access to LXD can attach host devices and filesystems, which presents a security risk. Only add users you'd trust with root access to the lxd group.
You can read more about LXD security in its documentation.
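For example, granting your own user access looks like this (a standard sketch; newgrp just picks up the new group without logging out and back in):

ubuntu@lxhost:~$ sudo usermod -aG lxd $USER
ubuntu@lxhost:~$ newgrp lxd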
The LXC command
LXD provides an lxc program you'll use to actually manage the containers. You can invoke this with lxd.lxc, but lxc works just fine. Make sure your PATH resolves to the correct lxc:
ubuntu@lxhost:~$ which lxc
/snap/bin/lxc
If you installed lxc via apt before, and your /usr/bin directory comes before /snap/bin in PATH, you'll run the wrong lxc command. You probably should go ahead and uninstall that lxc.
LXC provides a number of verbs:
ubuntu@lxhost:~$ lxc
alias    copy    file    info    manpage  operation  publish  restart   start
cluster  delete  help    init    monitor  pause      query    restore   stop
config   exec    image   launch  move     profile    remote   shell     storage
console  export  import  list    network  project    rename   snapshot  version
(Output from tab completion)
We'll go through some of these, but you can always use lxc <verb> --help to get more details (or man lxc.<verb>, but those are currently just autogenerated from the help text).
Creating a container
We'll start with a ROS Melodic container. I'm currently running the Ubuntu 20.04 Focal preview, so this is actually the easiest way for me to work with an older ROS.
ubuntu@lxhost:~$ lxc launch ubuntu:18.04 ros1-live
Creating ros1-live
Starting ros1-live
Just like that, we've spun up a container. An Ubuntu 18.04 machine named ros1-live is now available, and lxc list will show it.
ubuntu@lxhost:~$ lxc list
+-----------+---------+----------------------+------+-----------+-----------+
|   NAME    |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+-----------+---------+----------------------+------+-----------+-----------+
| ros1-live | RUNNING | 10.141.221.65 (eth0) |      | CONTAINER | 0         |
+-----------+---------+----------------------+------+-----------+-----------+
To work in it, we'll use lxc exec to run a login shell as the default ubuntu user:
ubuntu@lxhost:~$ lxc exec ros1-live -- sudo -iu ubuntu bash
ubuntu@ros1-live:~$
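Note that lxc exec runs its command as root inside the container; without the sudo -iu ubuntu wrapper you'd land in a root shell instead:

ubuntu@lxhost:~$ lxc exec ros1-live -- bash
root@ros1-live:~#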
Depending on how we like to develop, we may prefer to ssh in (for instance, to use VS Code's remote tools). We'll need to add our keys to the container to do so.
If our public key is stored at ~/.ssh/id_rsa.pub, we can add it in a number of ways.
Here we'll simply create an authorized_keys file with our public key in it, and push it:

ubuntu@lxhost:~$ cp ~/.ssh/id_rsa.pub /tmp/authorized_keys
ubuntu@lxhost:~$ lxc file push /tmp/authorized_keys ros1-live/home/ubuntu/.ssh/authorized_keys -p

The -p argument creates parent directories as needed (here, /home/ubuntu/.ssh), as with mkdir -p.
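lxc file push writes files owned by root by default. sshd still accepts a root-owned authorized_keys, but if you'd rather the file belong to the container's ubuntu user, lxc file push takes ownership flags (a sketch, assuming ubuntu is uid/gid 1000 inside the container):

ubuntu@lxhost:~$ lxc file push /tmp/authorized_keys ros1-live/home/ubuntu/.ssh/authorized_keys -p --uid 1000 --gid 1000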
We can now ssh into our container from the host and other containers. You can get the IP address using lxc list:
ubuntu@lxhost:~$ lxc list
+-----------+---------+----------------------+------+-----------+-----------+
|   NAME    |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+-----------+---------+----------------------+------+-----------+-----------+
| ros1-live | RUNNING | 10.141.221.65 (eth0) |      | CONTAINER | 0         |
+-----------+---------+----------------------+------+-----------+-----------+
ubuntu@lxhost:~$ ssh ubuntu@10.141.221.65
ubuntu@ros1-live:~$
You may want to add this address to your ~/.ssh/config:

Host ros1-live
    HostName 10.141.221.65
    User ubuntu
    IdentityFile ~/.ssh/id_rsa
    ForwardAgent yes
    ForwardX11 yes
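With that entry in place, connecting is as simple as:

ubuntu@lxhost:~$ ssh ros1-live
ubuntu@ros1-live:~$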
If you'd like to access the container from a remote PC, the default bridged network setup makes things tricky, as the containers are behind a NAT on the host. The easiest way to ssh in is to use the host as a jump host:
ubuntu@remote-pc:~$ ssh -J <host-address> ros1-live
or in the config file:
ubuntu@remote-pc:~$ cat ~/.ssh/config
Host ros1-live
    HostName 10.141.221.65
    User ubuntu
    ProxyJump <host-address>
    IdentityFile ~/.ssh/id_rsa
    ForwardAgent yes
    ForwardX11 yes
ubuntu@remote-pc:~$ ssh ros1-live
ubuntu@ros1-live:~$
If your remote PC is running Windows, you're likely using a version of ssh without the ProxyJump option. I recommend downloading the latest release of OpenSSH and putting it at the front of your Windows PATH variable, so VS Code and other tools can use it.
If you need more direct access, you can add an lxc proxy device to the container to forward a host port into it:

ubuntu@lxhost:~$ lxc config device add ros1-live proxy22 proxy connect=tcp:127.0.0.1:22 listen=tcp:0.0.0.0:2222
ubuntu@lxhost:~$ lxc config device show ros1-live
proxy22:
  connect: tcp:127.0.0.1:22
  listen: tcp:0.0.0.0:2222
  type: proxy
This will listen on the host port 2222 and forward connections to the container's port 22. You can learn more about proxy devices here.
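From the remote PC, you could then connect through that forwarded port (substitute your host's actual address):

ubuntu@remote-pc:~$ ssh -p 2222 ubuntu@<host-address>
ubuntu@ros1-live:~$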
If you don't want the container behind a NAT on the host, you can specify a different bridge configuration or add an IPVLAN or MACVLAN nic network device to the container. You can read more about nic devices here.
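For instance, attaching a MACVLAN nic that overrides the default bridged eth0 might look like this (a sketch; eno1 is a placeholder for your host's physical interface, and note that MACVLAN containers typically can't talk to the host itself):

ubuntu@lxhost:~$ lxc config device add ros1-live eth0 nic nictype=macvlan parent=eno1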
Sharing Files Transparently
You can neatly share directories between the host and the container using disk devices. Disk devices can be a bind-mount of an existing file or directory, a regular mount of a block device, or one of several other source types. You can read about disk devices here.
Let's create a directory and share it with the container:
ubuntu@lxhost:~$ mkdir share && cd share
ubuntu@lxhost:~/share$ mkdir ros1-live
ubuntu@lxhost:~/share$ lxc config device add ros1-live share disk source=~/share/ros1-live path=/home/ubuntu/share
In order to truly share access, we'll want to use the raw.idmap config option for the container to map your UID and GID. Assuming your UID and GID on the host are 1000 (the default for a single-user Ubuntu installation), you'll use the following command to set the option:
ubuntu@lxhost:~$ lxc config set ros1-live raw.idmap "both 1000 1000"
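The new idmap only takes effect when the container starts, so give it a restart:

ubuntu@lxhost:~$ lxc restart ros1-live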
Now any files you create and modify on the host and in the container will be indistinguishable, permissions-wise.
To verify the directory is shared:
ubuntu@lxhost:~$ ssh ros1-live
ubuntu@ros1-live:~$ ls share
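A quick round trip also confirms the bind-mount works both ways (the file name here is just for illustration):

ubuntu@lxhost:~$ touch ~/share/ros1-live/hello
ubuntu@lxhost:~$ ssh ros1-live ls share
hello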
Set up ROS
For once, setting up ROS will be the easy part. Just follow the instructions at the ROS wiki like normal.
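For reference, the Melodic install boils down to something like the following inside the container (condensed from the ROS wiki at the time of writing; treat the wiki as authoritative):

ubuntu@ros1-live:~$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
ubuntu@ros1-live:~$ sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
ubuntu@ros1-live:~$ sudo apt update && sudo apt install ros-melodic-desktop-full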
Developing in the Container
Create a workspace directory in the share directory, and use it as normal to develop; a sketch follows below.
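For a ROS 1 catkin workspace, that might look like this (assuming the Melodic install above):

ubuntu@ros1-live:~$ source /opt/ros/melodic/setup.bash
ubuntu@ros1-live:~$ mkdir -p ~/share/catkin_ws/src
ubuntu@ros1-live:~$ cd ~/share/catkin_ws && catkin_make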
If you intend to interface with actual hardware, you'll need to attach devices to your container. I've linked to the device configuration documentation several times before, but you'll want to look at it in depth for configuring your hardware.
For example, if you're controlling an OpenManipulatorX via a U2D2, you'll need to communicate via serial. This is done via a Unix character device at /dev/ttyUSB0 or similar. To add it to the container as a unix-char device, use:
ubuntu@lxhost:~$ lxc config device add ros1-live ttyusb0 unix-char source=/dev/ttyUSB0 path=/dev/ttyUSB0
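Inside the container the device node will be owned by root, so a non-root user may not be able to open it. unix-char devices accept uid, gid, and mode keys; one option is to hand the node to the dialout group (gid 20 on Ubuntu, which the default ubuntu user is typically already a member of):

ubuntu@lxhost:~$ lxc config device set ros1-live ttyusb0 gid 20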
You can also forward the entire usb device using the vendorid and productid of the device as you would for a udev rule. See the entry on usb devices.
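You can find those IDs with lsusb on the host. The values below are placeholders for illustration; substitute the pair printed for your own adapter:

ubuntu@lxhost:~$ lsusb
...
ubuntu@lxhost:~$ lxc config device add ros1-live u2d2 usb vendorid=0403 productid=6014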
You may notice GUI programs don't work, either well or at all, in containers. Simos Xenitellis wrote a wonderful guide on how to fix this, and with it you'll be able to run Gazebo, RViz, et al. in your containers.
Create a new profile for your containers as follows (note: this will use vim; if you do not know vim, use the alternative further below):
ubuntu@lxhost:~$ lxc profile create gui
ubuntu@lxhost:~$ lxc profile edit gui
and add the following to the config file that opened in vim
config:
  environment.DISPLAY: :0
  raw.idmap: both 1000 1000
  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
      - 'echo export PULSE_SERVER=unix:/tmp/.pulse-native | tee --append /home/ubuntu/.profile'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
description: GUI LXD profile
devices:
  PASocket:
    path: /tmp/.pulse-native
    source: /run/user/1000/pulse/native
    type: disk
  X0:
    path: /tmp/.X11-unix/X0
    source: /tmp/.X11-unix/X0
    type: disk
  mygpu:
    type: gpu
name: gui
used_by:
Alternatively, if you do not know or like touching vim, save the above to a temporary file, say, /tmp/foo.yaml, and do the following instead:
ubuntu@lxhost:~$ lxc profile create gui
ubuntu@lxhost:~$ lxc profile edit gui < /tmp/foo.yaml
(see documentation on profiles here).
To summarize the above, we tell lxc that every container with the gui profile on it should:
- Apply the raw.idmap config from earlier.
- On cloud-init, disable shm in /etc/pulse/client.conf.
- Set the PulseAudio server to the socket at /tmp/.pulse-native.
- Mount the host's PulseAudio socket to /tmp/.pulse-native in the container.
- Mount the X11 socket.
- Mount your GPU.
We apply this profile to the existing container using:
ubuntu@lxhost:~$ lxc profile add ros1-live gui
This adds the gui profile to the container alongside the already-applied default profile.
Restart the container using:
ubuntu@lxhost:~$ lxc restart ros1-live
And after ~30 seconds of downloading and installing the packages, the container should be up and ready to use. You can get GUI apps over ssh with X11 forwarding, or just from the lxc exec method of getting a shell.
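To sanity-check the GUI setup, try the test programs the profile installed (x11-apps and mesa-utils come from its package list):

ubuntu@lxhost:~$ lxc exec ros1-live -- sudo -iu ubuntu bash
ubuntu@ros1-live:~$ xeyes        # X11 check: a window should appear on the host's display
ubuntu@ros1-live:~$ glxinfo -B   # GPU check: should report your host's renderer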