Managing a Tarantool cluster via a command-line interface

Author: Alexey Romanov

Two years ago, we discussed how to develop distributed applications with Tarantool Cartridge. This comprehensive framework includes a CLI tool to simplify the development and usage of Tarantool Cartridge applications. In this article, I will cover the features of Cartridge CLI and explain how this tool can help you work with your local applications more efficiently. The article is mainly targeted at those who are already using Cartridge or want to get started. Let's go!

Problems solved by Cartridge CLI

Tarantool Cartridge solves the problems of interaction and scaling that arise when several microservices are combined in a single application. Let's talk about the difficulties that might appear during development.

Getting started

So, you want to use Cartridge, and your first question is, "How do I start my application?" At the very least, you need to implement an entry point. But it will only solve part of the problem. Later, you'll have to figure out how to start Cartridge applications in general. Cartridge CLI contains a ready-made application template. That template includes all the files required to start and configure your cluster. You don't have to worry about creating the right files with the right contents for your project. Besides, the standard template is easy to change to suit your preferences.
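
Creating a project from the standard template takes a single command (we'll walk through it in detail below):

cartridge create --name myapp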

Building, starting, and configuring the application

To build your application, you have to install at least Cartridge, which is a Lua package. At most, you may need a dozen more dependencies, which are also Lua packages. Of course, you could write your own build script and use it for all your applications, but there is no need to reinvent the wheel. Let's say you've built the application successfully. Now you want to run the instances locally and configure your application: create replica sets, assign them roles, and so on. With Cartridge CLI, all of that takes just three commands, as shown below.
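
For reference, here are those three commands (each is covered in detail later in the article):

cartridge build
cartridge start -d
cartridge replicasets setup --bootstrap-vshard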

Configuring replica sets and failover

Yes, you can always use the GUI. For some, it might be a huge plus. But I still want to highlight the advantages of Cartridge CLI, and here is why:

  • To configure a cluster via the GUI, you have to open your browser and click N times. To do the same using the CLI, you only have to type one command. At the very least, it saves time.
  • If you use the GUI to reset your cluster's configuration settings to default and then set it up again, you'll have to repeat the same steps and click N more times — instead of running a single command.
  • You can build the minimal cluster configuration just once, save it to a file, and commit it to a repository. After that, anyone (including you with each cluster restart) can bring up the configured cluster with one command (see the sketch right after this list).
  • Maybe you just don't like using a GUI.
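
For the third point, the workflow might look like this (a sketch: the replicasets save subcommand name is an assumption; saving the topology to a file and applying it are covered in the topology section below):

# save the current topology to a file (the save subcommand name is assumed)
cartridge replicasets save
# later, bring up the configured cluster with a single command
cartridge replicasets setup --bootstrap-vshard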

Application packaging

Imagine that you wrote an application and want to send it to a customer as an RPM package. First, you'll need to create a .spec file, set up the rpmbuild utility, and only then begin building the package. Tiring, isn't it? Cartridge CLI offers a unified process of application packaging, which supports common formats such as deb, rpm, tgz, and Docker image. There are many packaging options that simplify the process. With that, you don't need to worry about packaging at all. Just call one time-saving command.
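
For example, packaging the application as an RPM (the pack command is covered in detail in the packaging section):

cartridge pack rpm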

Creating and starting your first application

First, we need to install Cartridge CLI. On Debian or Ubuntu:

curl -L https://tarantool.io/IMYxqhi/release/2.7/installer.sh | bash
sudo apt install cartridge-cli

On CentOS, Fedora, or ALT Linux:

curl -L https://tarantool.io/IMYxqhi/release/2.7/installer.sh | bash
sudo yum install cartridge-cli

On macOS:

brew install cartridge-cli

To make sure the installation was successful, enter the following command:

cartridge version
    
If everything went well, you'll see a message with the installed version. Ignore the warning: we haven't created the project yet, so Cartridge itself isn't installed. The cartridge create command creates a Cartridge application based on the standard template:

cartridge create --name myapp && cd myapp

The standard application template contains the following files:

### Application files
├── README.md
├── init.lua
├── stateboard.init.lua
├── myapp-scm-1.rockspec
├── app
│   ├── admin.lua
│   └── roles
### Building and packaging
├── cartridge.pre-build
├── cartridge.post-build
├── Dockerfile.build.cartridge
├── Dockerfile.cartridge
├── package-deps.txt
├── systemd-unit-params.yml
### Local application launch
├── instances.yml
├── replicasets.yml
├── failover.yml
├── .cartridge.yml
├── tmp
### Testing
├── deps.sh
├── test
│   ├── helper.lua
│   ├── integration
│   └── unit
    
If you don't like the standard template, you can create your own template and use it instead:

cartridge create --from mytemplate --name mycustomapp && cd mycustomapp

A local Git repository with an initial commit will be initialized in the project root directory. To start the instances, let's first build our application:

cartridge build

The tarantoolctl utility is used to build the application. It installs all the necessary dependencies, including Cartridge. The dependencies are defined in the file myapp-scm-1.rockspec. If your project requires another dependency, add it to that .rockspec file and run cartridge build again. The project also contains the file cartridge.pre-build, which is executed before the dependencies are installed. It lets you install non-standard rocks modules using the same tarantoolctl utility:

#!/bin/sh
tarantoolctl rocks make --chdir ./third_party/my-custom-rock-module
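
For reference, adding a dependency to the rockspec looks roughly like this (a sketch: the module names and version pins are illustrative, not the template's exact contents):

-- myapp-scm-1.rockspec (fragment)
dependencies = {
    'tarantool',
    'lua >= 5.1',
    'cartridge == 2.7.4-1',    -- version pin is illustrative
    'custom-rock == 1.0.0-1',  -- hypothetical extra dependency
}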
    
Let's make sure the project was built successfully and all dependencies were installed. Running cartridge version with the --rocks flag displays the Cartridge version along with the project's rocks list.

Use the file instances.yml to configure your instances (names, ports, and so on). Describe them in the format <app name>.<instance name>. This file is included in the standard application template:

myapp.router:
  advertise_uri: localhost:3301
  http_port: 8081
myapp.s1-master:
  advertise_uri: localhost:3302
  http_port: 8082
myapp.s1-replica:
  advertise_uri: localhost:3303
  http_port: 8083
myapp.s2-master:
  advertise_uri: localhost:3304
  http_port: 8084
myapp.s2-replica:
  advertise_uri: localhost:3305
  http_port: 8085
    
To start the instances described in this file, use the cartridge start command:

cartridge start -d
# Make sure that all application instances have successfully started
cartridge status
    
You don't have to start all instances at once. For example, you can start only s1-master:

cartridge start s1-master -d
cartridge status s1-master
    
The entry point for our application is init.lua. This is the file that cartridge start launches under the hood. The standard application template contains the tmp folder in the root directory:

  • tmp/run is the directory containing PIDs of instance processes as well as their socket files.
  • tmp/data contains instance data.
  • tmp/log contains instance logs.

You can change the default paths to these directories in .cartridge.yml or with the corresponding cartridge start flags. To run instances in the background, use the -d flag. In this mode, logs are also saved to files, so you can display them later with cartridge log.

Using the --stateboard flag or the stateboard: true setting in .cartridge.yml, you can also start an isolated Tarantool instance to use as a state provider for stateful failover. I suggest always running a stateboard instance: you can use it later to configure failover. The standard application template already contains the stateboard: true setting, but you can remove it from the configuration file if necessary.
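
For illustration, a .cartridge.yml overriding the defaults might look like this (a sketch: the stateboard setting comes from the standard template, while the directory keys are assumed to mirror the corresponding cartridge start flags):

stateboard: true
run-dir: ./tmp/run
data-dir: ./tmp/data
log-dir: ./tmp/log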

Configuring the topology

The cartridge replicasets command lets you change the cluster topology in various ways. For example, it can apply the topology specified in a configuration file or save the current cluster configuration to a file. An application created from the standard template with cartridge create already contains replicasets.yml, the file that describes the basic topology of our cluster:

cartridge replicasets setup --bootstrap-vshard
# Make sure the topology was configured successfully
cartridge replicasets list

That's it! With a single command, we configured the topology and enabled sharding. Pretty cool, huh? Let's take a closer look at replicasets.yml:

router:
  instances:
  - router
  roles:
  - failover-coordinator
  - vshard-router
  - app.roles.custom
  all_rw: false
s-1:
  instances:
  - s1-master
  - s1-replica
  roles:
  - vshard-storage
  weight: 1
  all_rw: false
  vshard_group: default
s-2:
  instances:
  - s2-master
  - s2-replica
  roles:
  - vshard-storage
  weight: 1
  all_rw: false
  vshard_group: default

The file contains three replica sets named router, s-1, and s-2.

  • In the instances block, the instances in each replica set are defined. The names of the instances must correspond with the names defined in instances.yml.
  • The roles block defines roles for each replica set.
  • weight is the vshard weight of a replica set.
  • all_rw is a flag that makes all the instances in a replica set readable and writable.
  • vshard_group is the name of the vshard group that the replica set belongs to.

If, for some reason, you want to set it all up manually, without using replicasets.yml, there are more cartridge replicasets options:

# Join s1-master and s1-replica into the replica set s-1
cartridge replicasets join --replicaset s-1 s1-master s1-replica
# Create the router replica set with the router instance
cartridge replicasets join --replicaset router router
# Look at the currently available roles and choose the most suitable ones for each replica set
cartridge replicasets list-roles
# Add the role vshard-storage for the replica set s-1
cartridge replicasets add-roles --replicaset s-1 vshard-storage
# Add roles for the router replica
cartridge replicasets add-roles \
  --replicaset router \
  vshard-router app.roles.custom failover-coordinator metrics
# Finally, bootstrap vshard
cartridge replicasets bootstrap-vshard
# Look at the replica set configuration
cartridge replicasets list

To reset the cluster configuration, use the following commands:

cartridge stop
cartridge clean

Failover configuration

After configuring the cluster topology, let's set up failover:

cartridge failover setup
# Display failover state
cartridge failover status

The cartridge failover setup command applies the configuration defined in failover.yml, a file in our application's root directory:

mode: stateful
state_provider: stateboard
stateboard_params:
  uri: localhost:4401
  password: passwd

There are three failover options: eventual, stateful, and disabled.

  • eventual and disabled don't require any additional settings.
  • stateful requires defining a state provider in the state_provider field and specifying its parameters. Currently, stateboard and etcd2 providers are supported (an etcd2 sketch follows this list).
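
For etcd2, the configuration might look roughly like this (a sketch: the mode, state_provider, and lock_delay values come from this article, while the etcd2_params key names and the endpoint are assumptions):

mode: stateful
state_provider: etcd2
etcd2_params:
  endpoints:
  - http://127.0.0.1:2379
  lock_delay: 15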

To learn more about failover structure, check our documentation. Read the detailed description of the cartridge failover command to learn about all the possible failover parameters. You can also enter failover configuration options right in the command line with cartridge failover set:

cartridge failover set stateful \
  --state-provider etcd2 \
  --provider-params '{"lock_delay": 15}'

To turn failover off, use the following commands:

cartridge failover disable
# or
cartridge failover set disabled

Connecting to instances

You might need to connect to an instance and run some commands there, for example, cartridge.reload_roles(). That's easy! With cartridge enter, you can connect to an instance via a console socket stored in run-dir. No additional parameters are needed. Just pass the name of the instance as specified in instances.yml:

cartridge enter instance-name
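
For example, connecting to the router instance and reloading the application roles might look like this (a sketch; router is the instance name from instances.yml):

cartridge enter router
# in the opened instance console:
require('cartridge').reload_roles()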

You can also use cartridge connect to connect to an instance. The difference is that cartridge connect lets you specify the instance address or the path to the UNIX socket:

cartridge connect localhost:3301 \
  --username admin \
  --password secret-cluster-cookie
# or
cartridge connect admin:secret-cluster-cookie@localhost:3301

Packaging the application

The cartridge pack <type> command is used for application packaging. There are four packaging options available at the moment:

  • deb — a DEB package
  • rpm — an RPM package
  • tgz — a TGZ archive
  • docker — a Docker image

For example, to package your app into a TGZ archive, use the following command:

cartridge pack tgz

How do you build an RPM or DEB package of a Cartridge application on macOS? Unfortunately, you can't do it by simply calling cartridge pack rpm|deb: the packed application would contain rocks and executables that can't be used on Linux. This is where the --use-docker flag comes to the rescue. It builds the application inside Docker:

cartridge pack deb --use-docker

In addition to --use-docker, cartridge pack has many other useful options. Let's have a look at the most interesting ones.

Adding dependencies to the package

Let's add a dependency to our RPM or DEB package, for example, unzip:

cartridge pack deb --deps "unzip>=6.0"

Alternatively, you can define the dependencies for your package in package-deps.txt, a file in the application root directory:

unzip==6.0
neofetch>=6,<7
gcc>8

Now, after you pack the application with cartridge pack deb, your package will list unzip, neofetch, and gcc as dependencies. You can use a different file to specify your dependencies. Provide the path to that file with the --deps-file flag:

cartridge pack rpm --deps-file=path-to-deps-file

Adding pre- and post-install scripts

What if installing your package requires creating a file or directory or even installing a utility? In other words, what if you need to customize the package installation scripts? The files preinst.sh and postinst.sh serve exactly this purpose.

Provide only absolute paths to executables in the pre- and post-install scripts. Alternatively, you can use /bin/sh -c '':

/bin/sh -c 'touch file-path'
/bin/sh -c 'mkdir dir-path'

Use the --preinst and --postinst flags to provide scripts with arbitrary names and paths:

cartridge pack rpm \
  --preinst=path-to-preinst-script \
  --postinst=path-to-postinst-script
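
For instance, a pre-install script passed via --preinst could create a directory for the application (a sketch; the path is hypothetical):

/bin/sh -c 'mkdir -p /etc/myapp'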

The scripts only work for RPM and DEB package builds.

Caching paths

Each time packaging starts, the application is built from scratch. All the rocks dependencies are rebuilt. To avoid that and reduce repackaging time, you can cache paths using the file pack-cache-config.yml:

- path: '.rocks'
  key-path: 'myapp-scm-1.rockspec'
- path: 'node_modules'
  always-cache: true
- path: 'third_party/custom_module'
  key: 'simple-hash-key'

Let's take a closer look at the parameters in this configuration file:

  • path is the path from the project root directory to the cached file or directory.
  • key-path is the path to the file, the contents of which will be used as the cache key. In the example above, myapp-scm-1.rockspec works as the cache key for the .rocks path. If you change the file, there will be no cache hit, and all the application rocks will be built anew.
  • always-cache indicates that the specified path must always be cached regardless of cache keys.
  • key is a simple cache key in the string format.

The standard application template already contains one cached path:

- path: '.rocks'
  key-path: myapp-scm-1.rockspec

I recommend always caching the contents of the .rocks directory based on what's inside the .rockspec file. One path can have only one cache. Suppose the .rocks directory is cached, and then you change the key and start packaging the application. At that moment, the old .rocks cache is deleted and replaced with a new one based on the new key. To turn off path caching, use the --no-cache flag. The full list of cartridge pack options can be found in the command documentation.

Packaging in detail

cartridge pack not only packages the application but also builds it the same way as cartridge build. By default, the application is built in a temporary directory, ~/.cartridge/tmp. You can change it to another directory by setting the CARTRIDGE_TEMPDIR environment variable.

  • If this directory doesn't exist, it will be created, used to build the application, and then deleted.
  • Otherwise, the application will be built in the CARTRIDGE_TEMPDIR/cartridge.tmp directory.
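
For example, to build in a custom temporary directory (the path here is arbitrary):

CARTRIDGE_TEMPDIR=./build-tmp cartridge pack tgz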

The temporary build directory containing the source files of your application is created in three steps:

  1. The contents of the application directory are copied to the temporary directory. After that, files ignored by Git are removed with git clean -X -d -f, and the .rocks and .git directories are deleted.

  2. The application is built in the cleaned directory.

  3. The cartridge.post-build script is executed, if it exists.

The project root contains cartridge.post-build, a script designed to delete build artifacts from the resulting package. After the application is built, special files such as VERSION and VERSION.lua, which contain the application version, are generated in the temporary directory. For RPM and DEB builds, the systemd and tmpfiles directories are initialized. Then the application is packed. You can read more about the contents and further usage of the resulting RPM and DEB packages in our documentation, as well as about working with Docker images.

Conclusion

Cartridge CLI is a convenient, unified application management interface that saves you from reinventing the wheel. In this article, I showed how to use the CLI to manage your local Tarantool Cartridge application efficiently and easily. We learned how to start the application, configure its topology and failover, package it, and connect to specific instances. There is one more command in Cartridge CLI that I haven't covered: cartridge admin makes it easier for developers to write and maintain operational cases, increase operation reuse, and optimize delivery to production. If something goes wrong or you want to suggest an improvement, you can always create an issue in our GitHub repository. We are ready to help and open to suggestions!

Get Tarantool on our website

Get help in our Telegram channel