In this series I will be building an ACA-py controller in Go, which uses my go-acapy-client. The controller will be able to create connections and issue credentials. You can find the code for this project at github.com/ldej/issuer.
The architecture for the issuer consists of four services: ACA-py, tails-server, postgres and the controller. The naming conventions around Aries have not completely crystallised yet, so let's define what I mean by each term.
ACA-py (aries-cloudagent-python) is the Python application that interacts with other Aries agents. The controller is the application that interacts with the admin API of ACA-py. The application as a whole, consisting of the four services, will be called the issuer.
ACA-py will use a postgres database as a wallet backend, which means it stores the wallet data in the database. The issuer will use postgres as well, but it will not interact with the wallet data directly. Instead, it will have a separate database to store its own information, for example user information. Any interaction with the wallet from the controller will happen via ACA-py, never directly.
The tails-server is required for issuing credentials that can be revoked. The tails-server receives requests from ACA-py to store tails files and serves them for download. The only action the tails-server performs on the ledger is retrieving the revocation registry definition.
Three of the services have external interactions. ACA-py and the tails-server are the two services that communicate with the ledger; the controller does not interact with the ledger. The controller receives requests from, for example, a browser. ACA-py receives requests from other Aries agents. The tails-server receives requests for the tails files. The URL you use for --tails-server-base-url ends up in the revocation registry definition transactions on the ledger, so agents can download the tails file to verify presentation proofs.
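To make these roles concrete, here is a sketch of how starting ACA-py could look. The flags are real ACA-py options, but the hostnames and ports are placeholders rather than the exact configuration of this project:

$ aca-py start \
    --label issuer \
    --endpoint https://acapy.example.com \
    --genesis-url http://test.bcovrin.vonx.io/genesis \
    --tails-server-base-url https://tails.example.com \
    --inbound-transport http 0.0.0.0 8000 \
    --outbound-transport http \
    --admin 0.0.0.0 11000 --admin-insecure-mode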
I included nginx as a reverse proxy to expose the controller, ACA-py and the tails-server.
ACA-py docker image
I have found three methods of creating a docker image for ACA-py.
The first method is to use the Dockerfile provided by the aries-cloudagent-python repository. The docker folder contains Dockerfiles for building images. You can build the image from a checkout of the repository.
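The exact Dockerfile name varies between versions, but a build command along these lines should work, tagging the image as acapy to match the run command below:

$ cd aries-cloudagent-python
$ docker build -t acapy -f docker/Dockerfile.run .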
You can then run the image as if you are running aca-py directly from the terminal:
$ docker run -it --rm acapy start --help
The benefit of using the provided Dockerfile is that you do not have to create anything yourself. The image also supports connecting to a postgres database as a wallet backend. If you want to run a different version of ACA-py, you can check out the desired version of the repository and build exactly that version.
If you want a bit more control, you can use the second method, which is to modify the Dockerfile provided by the ACA-py repository. You will probably end up with much the same configuration, but it allows you to add more tools if you need them.
The third method is to create a Dockerfile from the ubuntu:18.04 base image yourself, which is what I did initially. The only requirement I have is that I should be able to use a postgres database as the wallet storage backend. For this to work I created a checkout of both hyperledger/aries-cloudagent-python and hyperledger/indy-sdk, as indy-sdk contains the postgres storage support for libindy. I could have built libindy as well, but decided to go for a stable version installed with apt-get.
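For reference, a rough sketch of what such a Dockerfile could look like. This is not the exact file I ended up with; the package sources follow the usual libindy install instructions but should be double-checked, and the postgres storage plugin built from the indy-sdk checkout still has to be copied in:

FROM ubuntu:18.04

# Add the Sovrin apt repository and install libindy plus Python
RUN apt-get update && \
    apt-get install -y software-properties-common apt-transport-https curl && \
    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys CE7709D068DB5E88 && \
    add-apt-repository "deb https://repo.sovrin.org/sdk/deb bionic stable" && \
    apt-get update && \
    apt-get install -y libindy python3 python3-pip

# Install ACA-py from the local checkout
COPY aries-cloudagent-python /aries-cloudagent-python
RUN pip3 install /aries-cloudagent-python

# The postgres wallet storage plugin built from the indy-sdk checkout is copied in here

ENTRYPOINT ["aca-py"]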
One more useful tool I added is docker-compose-wait, which allows you to wait for a port to become available before continuing. This is useful because ACA-py requires the database to be running if you want to use postgres wallet storage.
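In the docker-compose file this boils down to an environment variable and a wrapped command; a minimal sketch, assuming the postgres service is called db and listens on its default port:

  acapy:
    environment:
      WAIT_HOSTS: "db:5432"    # wait until postgres accepts connections
    command: sh -c "/wait && aca-py start --auto-provision"    # plus the usual start flags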
The postgres database uses the default postgres docker image. When creating the container I added a volume (-v ./postgres:/docker-entrypoint-initdb.d/) that mounts an initialisation script which creates a database for the controller.
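The initialisation script itself can be as small as a single statement; a sketch, assuming the controller database is simply called controller:

-- postgres/init.sql, executed automatically from /docker-entrypoint-initdb.d/
CREATE DATABASE controller;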
ACA-py will create a database in postgres with the name you provide as --wallet-name, so make sure you do not choose a wallet name that is already a database in postgres. ACA-py will create the database when you run it with the provision argument, or when you run it with start and the --auto-provision parameter.
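As a sketch, provisioning a postgres-backed wallet could look roughly like this; the wallet name, key, host and credentials are placeholders:

$ aca-py provision \
    --wallet-type indy \
    --wallet-name issuer_wallet \
    --wallet-key secret \
    --wallet-storage-type postgres_storage \
    --wallet-storage-config '{"url":"db:5432"}' \
    --wallet-storage-creds '{"account":"postgres","password":"postgres","admin_account":"postgres","admin_password":"postgres"}' \
    --genesis-url http://test.bcovrin.vonx.io/genesis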
The controller uses a basic Dockerfile based on the Go alpine image. The only extra tool it contains is the docker-compose-wait script, to delay starting the controller until ACA-py has started.
FROM golang:1.15-alpine

WORKDIR /go/src/controller/
COPY controller .
RUN go mod download
RUN go install .

# Add docker-compose-wait tool
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait /wait
RUN chmod +x /wait

CMD /wait && controller
The next part in this series will discuss the functionality in the controller.
nginx & certbot
To expose the services to the outside world, I use nginx. I used this blog post as a source of inspiration for getting the easiest setup to work. The blog post provides a script called init-letsencrypt.sh which lets you easily set up an nginx container that serves the certificates obtained by certbot.
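The nginx configuration itself boils down to a server block per hostname that proxies to the right container. A sketch for the controller, with a placeholder domain and port; the blocks for ACA-py and the tails-server look the same:

server {
    listen 443 ssl;
    server_name controller.example.com;

    # Certificates in the standard location written by certbot
    ssl_certificate     /etc/letsencrypt/live/controller.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/controller.example.com/privkey.pem;

    location / {
        # Forward to the controller container on the docker network
        proxy_pass http://controller:8080;
    }
}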
The most notable setting in the environment file is LEDGER_URL. When I run the application locally, I do not connect to any of the hosted ledgers; instead I connect to a locally running github.com/bcgov/von-network. The von-network browser is available on localhost:9000, but not within the docker network. Instead, you can reach it by using the docker host IP address.
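A local environment file could then look something like this; the IP address is whatever your docker host uses (typically 172.17.0.1 on Linux) and the ports for ACA-py and the tails-server are placeholders:

# environment file for local development
LEDGER_URL=http://172.17.0.1:9000
ACAPY_ENDPOINT_URL=http://172.17.0.1:8000
TAILS_SERVER_URL=http://172.17.0.1:6543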
I also have a production environment file, which contains publicly accessible URLs for ACAPY_ENDPOINT_URL and TAILS_SERVER_URL, and http://test.bcovrin.vonx.io for LEDGER_URL.
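In rough form, with placeholder domains:

# environment file for production
LEDGER_URL=http://test.bcovrin.vonx.io
ACAPY_ENDPOINT_URL=https://acapy.example.com
TAILS_SERVER_URL=https://tails.example.com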
When running the application locally, I override properties in the docker-compose file:
version: '3'
services:
  controller:
    build:
      context: .
      dockerfile: docker/controller.dockerfile
    volumes:
      - ./controller:/go/src/controller
    command: ["sh", "-c", "/wait && go run ."]
  acapy:
    build:
      context: .
      dockerfile: ./docker/acapy.dockerfile
  tails-server:
    build:
      context: ./indy-tails-server
      dockerfile: ./docker/Dockerfile.tails-server
    volumes:
      - ./tails-files:/tails-files/
Mounting the Go code for the controller means I do not have to rebuild the controller image when I change code.
The last docker-compose file is only required for production, as that is where I want to start nginx and certbot.
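A sketch of what it could contain; the image versions, ports and volume paths are illustrative, following the init-letsencrypt.sh setup mentioned above:

version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx:/etc/nginx/conf.d
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
  certbot:
    image: certbot/certbot
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot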
And that is how you can develop your own controller, running all services in docker containers, and expose it to the internet using nginx. It took me more time than expected to get all the pieces to work together, but the final result is a good start for any controller.
In the next post I will dive deeper into the functionality of the controller.
Please let me know if this was useful or if you have any questions.