Tech Blog (csantosb)

Continuous Integration with VHDL

gitlab-ci.png

Writing VHDL code is fine with me. Testing it is a different matter. Automating tests is something I’ve always wanted to implement in my workflow but never had time for.

Now the time has come.

To me, automatic tests involve writing a set of testing routines (as complicated as needed, that’s not the point now). Then, these routines must be run at specific events, triggered at the right time, for example when a new commit happens. This is continuous integration (CI from now on).

With this post I aim at summarizing the steps I had to follow to do CI properly, starting from writing tests locally, up to deploying a Docker image and finally scripting the CI pipeline with the required jobs. I will be using it as a reference for further work on this topic, so I’ll try to keep it up to date.

Additionally, I have developed a couple of examples to practice and to demonstrate the feasibility of this approach to automating tests:

  • Mux-fifo multiplexes a series of parallel data streams into a single FIFO
  • Ft2232h is a controller for the USB chip of the same name

both including self-checking. Feel free to look in there for inspiration.

I’m using GitLab’s CI infrastructure here, by the way.

Tests

Once one has developed a design, testing it becomes mandatory. This most frequently implies complying with some kind of specification to be described in a testbench. The kind and complexity of the testbench is mostly a matter of taste (when one has the choice). The same goes for the language to use (VHDL, (System)Verilog, C/C++, Python, etc.). Options do exist.

To me, the most important points at this stage are the following. First, don’t rely on a wave chronogram to check your design, as this has strong limitations: you will never be able to verify all transitions on a large number of signals over a long time period. If you do, you are limiting yourself to a corner case, and that is bad practice. Second, don’t just check correctness with a single test vector, whether you produce it at run time or generate it by other means. Once again, this would mean accepting a part as the whole: a particular test case is not enough. Don’t get me wrong, proceeding this way is fine as long as you are developing and debugging your tests. But it cannot be THE test suite. I hope my point is clear.
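The spirit of randomized, self-checking tests can be sketched in plain Python. Everything here is hypothetical (the round-robin reference model, the sizes, the function names); it only illustrates the pattern of comparing many random vectors against a golden model instead of a single fixed one:

```python
import random

def golden_mux(streams):
    """Reference model: interleave parallel streams, round-robin, into
    the order a mux-fifo would produce (a simplifying assumption)."""
    fifo = []
    for words in zip(*streams):
        fifo.extend(words)
    return fifo

def run_random_tests(n_runs=100, n_streams=4, depth=8, seed=0):
    """Generate many randomized vectors instead of a single fixed one."""
    rng = random.Random(seed)  # seeded, so failures are reproducible
    for _ in range(n_runs):
        streams = [[rng.randrange(256) for _ in range(depth)]
                   for _ in range(n_streams)]
        expected = golden_mux(streams)
        # In a real flow, `actual` would come from the simulator; here
        # we reuse the model just to show the compare-against-golden shape.
        actual = golden_mux(streams)
        assert actual == expected, f"mismatch on vector {streams}"

run_random_tests()
```

The seed makes failing runs reproducible, which matters once the vectors are random.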

Following the previous examples, a couple of testing routines for the mux-fifo and ft2232h projects have been developed using cocotb. Both provide a means of obtaining a wave trace by setting DEBUG=1; otherwise, they just run a series of tests to completion, stopping when they all succeed. They may be executed locally with a simple make command: refer to each project’s documentation for details.
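A cocotb test looks roughly like the sketch below. It is not taken from either project: the signal names (clk, din, dout), the one-cycle latency and the word width are all assumptions, and the module only runs under a simulator driven by cocotb’s Makefile, not standalone:

```python
# test_design.py -- hypothetical cocotb testbench sketch
import random

import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge

@cocotb.test()
async def random_data_test(dut):
    """Drive random words and self-check the output."""
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())
    for _ in range(100):
        word = random.randrange(256)
        dut.din.value = word
        await RisingEdge(dut.clk)
        # Assumes a one-cycle path; a real FIFO check would model
        # the actual latency (or use a scoreboard of expected values).
        await RisingEdge(dut.clk)
        assert dut.dout.value == word, f"mismatch: {dut.dout.value} != {word}"
```

The simulator, toplevel and test module are then selected through the cocotb Makefile variables, which is what the projects’ make targets wrap.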

Once we have our design and tests, we need an environment to run them from. Enter Docker.

Docker

Docker is the technology behind a kind of lightweight virtual machine, running customized images. Images are defined using plain text files called Dockerfiles, and each new image builds on top of a previous one. So, for example, an image of a GNU/Linux distribution may be extended by adding extra packages providing a given application, and the resulting image may be stored in a remote repository of Docker images, ready to be used. Note that the idea of this approach is having access to the application, not the distribution, which is only the support. This means that base distribution images are most frequently rather minimalist, in order to reduce the image’s size. That’s it. Any extra information about Docker may be found, as usual, in the Arch wiki; no need to repeat it here.

As a practical example, let’s build a custom Docker image based on a minimal Arch Linux. The Dockerfile starts FROM an existing image; then, once inside the image, it ADDs a .tar.gz file to a temporary directory and RUNs standard shell commands: sync repositories, install packages, configure a non-root user, change USER and WORKDIR, RUN a couple more commands to install the latest ghdl-gcc-git and python-cocotb-git packages from the AUR, and then compile the Xilinx Vivado libraries so that they are always available. Then, do a bit of cleanup to reduce the image size. Finally, build the image with
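The steps above can be sketched as a Dockerfile. Package names, file names and paths here are illustrative, not the actual image’s recipe:

```dockerfile
# Hypothetical sketch of the Dockerfile described above
FROM archlinux:latest

# Bring in local sources (e.g. pre-built vendor libraries) if needed
ADD extras.tar.gz /tmp/

# Sync repositories and install the build toolchain
RUN pacman -Syu --noconfirm base-devel git sudo

# AUR packages must be built as a non-root user
RUN useradd -m builder && \
    echo "builder ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
USER builder
WORKDIR /home/builder

# Build and install ghdl and cocotb from the AUR
RUN git clone https://aur.archlinux.org/ghdl-gcc-git.git && \
    cd ghdl-gcc-git && makepkg -si --noconfirm
RUN git clone https://aur.archlinux.org/python-cocotb-git.git && \
    cd python-cocotb-git && makepkg -si --noconfirm

# Clean up package caches and temporaries to keep the image small
RUN sudo pacman -Scc --noconfirm && rm -rf /tmp/* ghdl-gcc-git python-cocotb-git
```

Compiling the Vivado libraries would be one more RUN step, omitted here since it depends on a local Vivado installation.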

docker build -t csantosb/arch-vhdl .

and check with

docker images

You’ll see something like

REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
csantosb/arch-vhdl   latest              2bad8ac856ad        2 days ago          1.25GB
archlinux            latest              9651b9e35f39        2 weeks ago         412MB

Once you have the image, you can get into it (this is a container in Docker parlance) with

docker run -it csantosb/arch-vhdl /bin/sh

and validate all the previous steps. You can even mount shared directories from the local host, and proceed from within the container. By the way, get out of the container with exit (it helps), or include any other additional package you wish, but remember this is volatile storage.
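Mounting the host directory looks like this (the in-container path and the make target are illustrative):

```shell
# Mount the current project into the container and run the tests there
docker run -it --rm \
    -v "$PWD":/home/builder/project \
    -w /home/builder/project \
    csantosb/arch-vhdl \
    make
```

The --rm flag removes the container on exit, consistent with treating it as volatile storage; only the mounted directory persists.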

Finally, you have the possibility to upload the image to the docker hub repository (once you have your credentials) with

docker push csantosb/arch-vhdl

from where it will be accessible with a simple

docker pull csantosb/arch-vhdl

after removing it locally.

At this point, we have a custom environment with all the necessary tools to run the previous tests.

Gitlab’s CI

GitLab provides much more than what’s necessary to manage a project. Among other interesting features, it provides a handy CI infrastructure, well documented and available with free projects. One gets access to CI through a configuration script file, where the environment to use, the jobs to run and their contents, the ordering of jobs in stages (a pipeline), etc. must be specified. Based on this file, CI is triggered upon new commits (this may be customized, as may most of the rest). This way, one has a way to perform a series of configurable tasks to verify the correctness of the code (or anything else one may imagine). In particular, it’s possible to run a series of tests on a firmware design, returning a visual indication of whether things are going as expected.

Let’s see how it happens in detail with one of the previous examples, ft2232h (notice the funny badges above the table of contents: green means you may be satisfied with your work). Here, we are using a .gitlab-ci.yml file to tell GitLab’s CI engine what to do when a new commit hits the repository. First comes the Docker image to use, the one we defined previously, defining the environment. Then follows a set of instructions to be run before any job: this avoids having to repeat them several times below.

Variables may be custom or standard ones provided by GitLab, and stages group jobs into successive steps: each one needs to succeed in order for the next to run. Most usually, you’ll want to build something, then test things, and finally deploy somewhere (meaning moving to production). Then come the jobs.

The first job (which only runs on the git master branch) belongs to the build stage: it runs a dummy test, just in order to produce a ghdl executable file. By declaring the resulting directory as an artifact, its contents are transferred to the next stage, avoiding the need to rebuild the executable all over again. For huge projects, this makes a difference. All the remaining jobs, belonging to the test stage, execute the set of commands under script (make …), saving the products as artifacts which may be downloaded afterwards. In particular, the xml report may be inspected directly from the GitLab GUI. Note that each job gets executed in a new, clean environment (a Docker container built on the defined image, running on top of a CoreOS distribution in a single-CPU Google Compute Engine VM with 25 GB of disk and 3.75 GB of RAM, in case you were wondering).

Finally, a further job in the deploy stage (which gets executed when all tests succeed) produces the pages for the design, based on doxygen, the embedded comments and the provided Doxyfile. Note how here we define, locally to the job, the before_script and the image to be used, overriding the global definitions.
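The overall shape of such a pipeline can be sketched as a .gitlab-ci.yml. Job names, paths and make targets below are hypothetical, but the structure (global image and before_script, three stages, artifacts between stages, a pages job with local overrides) follows what was just described:

```yaml
# Hypothetical sketch of the .gitlab-ci.yml described above
image: csantosb/arch-vhdl

before_script:
  - export PATH=$HOME/.local/bin:$PATH

stages:
  - build
  - test
  - deploy

build:
  stage: build
  only:
    - master
  script:
    - make build          # dummy run producing the ghdl executable
  artifacts:
    paths:
      - sim/              # handed over to the test stage as-is

test_random:
  stage: test
  script:
    - make test_random
  artifacts:
    paths:
      - results.xml       # report viewable from the GitLab GUI

pages:
  stage: deploy
  image: alpine           # job-local image overrides the global one
  before_script:          # job-local before_script, same idea
    - apk add --no-cache doxygen
  script:
    - doxygen Doxyfile
    - mv html public
  artifacts:
    paths:
      - public
```

GitLab serves whatever the pages job leaves in public/ as the project’s static site, which is how the doxygen documentation ends up published.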

All together

As a result, for each new commit, one gets (in the case of the mux_fifo example) a new pipeline of jobs (automated tests in a custom environment), returning a failure (along with the accompanying email) or a success status. Exactly what we were looking for, right?

Conclusion

More advanced features do exist: refer to the documentation for details. Here I only introduce what’s necessary for my particular use case.
