The Merge Swag!

Sometimes, when you’re building a huge infrastructure with tons of microservices, it can be a little nightmare to keep everything well documented in a single access point.

In our case, we want to display our documentation using Apiary, and we have several endpoints documented with Swagger.

If we use our current flow, we will generate one API per endpoint, which makes sense but looks horrible!

Multiple APIs

So, what can we do in this case? Maintaining all our documentation by hand in a single Swagger file isn’t an option, because these are different pieces of software deployed on different platforms and in different ways, so… why not merge all of those Swagger files into a bigger one?

Swagger-combine

Surfing the internet, I found a Node tool called ‘swagger-combine’ that will help us with our task: with some configuration, and by placing it inside a Docker image, we can manage the documentation problem inside our pipelines.

Installing the tool and configuring it

The installation is quite easy: it’s a Node tool, so with a simple ‘npm install -g swagger-combine’ we will have it ready to use from our favorite shell.

(Just in case you don’t have NPM, check here!)
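Something like this, assuming a standard Node/NPM setup:

# Install the tool globally so it is available from any shell
npm install -g swagger-combine

# Quick check that the binary ended up on the PATH
command -v swagger-combine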

Once installed, we just need a single configuration file. It starts with the ‘swagger info’ section: general information about our API, definition, ports, base path, that kind of stuff. After this general section comes another one named ‘apis’, and here is where the real magic happens: we add the URLs to each of our Swagger files (and, in our case, the GitLab token to get access to them), and that’s all the configuration needed. Let me show you a little example:
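Something along these lines (the host, the GitLab URLs and the private_token value are placeholders, adapt them to your own repositories):

{
  "swagger": "2.0",
  "info": {
    "title": "Central API",
    "version": "1.0.0"
  },
  "host": "api.example.com",
  "basePath": "/",
  "apis": [
    { "url": "https://gitlab.example.com/team/payments-service/raw/master/swagger.json?private_token=<your-token>" },
    { "url": "https://gitlab.example.com/team/customers-service/raw/master/swagger.json?private_token=<your-token>" }
  ]
}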

Merge it!

Well, now we have everything ready to start, so open a new shell, go to the folder where you placed the config JSON and type ‘swagger-combine config.json -o combinedSchema.json’. This will generate a new file at the desired output path (combinedSchema.json in this case) with all the endpoint information inside.
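In practice it looks like this (the last command is just an optional sanity check to see how many paths ended up in the combined file):

cd /path/to/your/config
swagger-combine config.json -o combinedSchema.json

# Optional: count the merged paths
node -e "console.log(Object.keys(require('./combinedSchema.json').paths).length + ' paths combined')"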

OK but… what do I want this for?

Right now we have a single Swagger file with many endpoints, but we still need to show them in Apiary. One of the options is to open the browser, go to your Apiary account and paste the JSON into the Apiary Editor. This will work, but… what about automating it?

Hello Jenkins

Our current pipeline is a little complex and covers a lot of different topics, so I will just extract the ‘publish to Apiary’ part. We have a Docker image with curl and swagger-combine installed that acts as a Jenkins slave, so when we deploy a new endpoint, once the build is done, we launch the Docker container and publish the documentation to Apiary. If for some reason it fails, we stop the pipeline and log the error: we can’t accept an endpoint without proper documentation.
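To give you an idea, the relevant part of the Jenkinsfile looks roughly like this (a simplified sketch: the agent label and the publish-to-apiary.sh wrapper around the curl call to Apiary are placeholders for our internal setup):

pipeline {
    // Docker image with curl and swagger-combine installed, used as Jenkins slave
    agent { label 'swagger-combine' }
    stages {
        stage('Combine Swagger files') {
            steps {
                sh 'swagger-combine config.json -o combinedSchema.json'
            }
        }
        stage('Publish to Apiary') {
            steps {
                // Wrapper around the curl call to Apiary; if it exits with an
                // error, the pipeline stops right here
                sh './publish-to-apiary.sh combinedSchema.json'
            }
        }
    }
    post {
        failure {
            echo 'Documentation was not published to Apiary, failing the build'
        }
    }
}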

Thanks to this, we can have our fabulous Fexco Central API with all our endpoints together.

 
