How to use Linux vsock for fast VM communication

I’ve been experimenting with different ways of building Linux VM images lately, but to make these images practical, they need to interact with the outside world. At a minimum, they need to communicate with the host machine.

vsock is a technology designed specifically with VMs in mind. It eliminates the need for a TCP/IP stack or network virtualization to enable communication with the host or between VMs. At the API level, it behaves like a standard socket but uses a special addressing scheme.

In the experiment below, we will explore vsock as a transport mechanism for a gRPC service running on a VM. We will build this project with Bazel for easy reproduction. If you want an introduction to Bazel, check out this post.


Inspiration

There are many use cases for efficient communication between a VM and its host (or between multiple VMs). A simple reason is to create a streamlined environment within the VM and issue commands via RPC from the host. This is the primary driver for using gRPC in this example, but you can easily generalize the approach shown here to build far more complex systems.

GitHub repo

The entire repository is hosted here and serves as the source of truth for this experiment. Although there may be minor discrepancies between the code blocks below and the repository, please rely on GitHub as the definitive source.

Code breakdown

Let’s analyze the code step by step:

External dependencies

Here are the external dependencies listed as Bazel modules:

bazel_dep(name = "rules_proto", version = "7.1.0")
bazel_dep(name = "rules_cc", version = "0.2.14")
bazel_dep(name = "protobuf", version = "33.1", repo_name = "com_google_protobuf")
bazel_dep(name = "grpc", version = "1.76.0.bcr.1")

This is largely self-explanatory. The protobuf repository is used for the C++ proto-generation rules, and the grpc monorepo provides the Bazel rules to generate gRPC code for the C++ family of languages.

gRPC library generation

The following Bazel targets generate the required C++ Protobuf and gRPC libraries:

load("@rules_proto//proto:defs.bzl", "proto_library")
load("@com_google_protobuf//bazel:cc_proto_library.bzl", "cc_proto_library")
load("@grpc//bazel:cc_grpc_library.bzl", "cc_grpc_library")

proto_library(
    name = "vsock_service_proto",
    srcs = ["vsock_service.proto"],
)

cc_proto_library(
    name = "vsock_service_cc_proto",
    deps = [
        ":vsock_service_proto",
    ],
    visibility = [
        "//server:__subpackages__",
        "//client:__subpackages__",
    ],
)

cc_grpc_library(
    name = "vsock_service_cc_grpc",
    grpc_only = True,
    srcs = [
        ":vsock_service_proto",
    ],
    deps = [
        ":vsock_service_cc_proto",
    ],
    visibility = [
        "//server:__subpackages__",
        "//client:__subpackages__",
    ],
)

The proto definition itself is straightforward:

syntax = "proto3";

package popovicu_vsock;

service VsockService {
  rpc Addition(AdditionRequest) returns (AdditionResponse) {}
}

message AdditionRequest {
  int32 a = 1;
  int32 b = 2;
}

message AdditionResponse {
  int32 c = 1;
}

It simply exposes a service capable of adding two integers.

Server implementation

The BUILD file is straightforward:

load("@rules_cc//cc:defs.bzl", "cc_binary")

cc_binary(
    name = "server",
    srcs = [
        "server.cc",
    ],
    deps = [
        "@grpc//:grpc++",
        "//proto:vsock_service_cc_grpc",
        "//proto:vsock_service_cc_proto",
    ],
    linkstatic = True,
    linkopts = [
        "-static",
    ],
)

We want a statically linked binary to run on the VM. This option simplifies deployment, allowing us to drop a single file on the VM.

The code is largely self-explanatory:

#include <iostream>
#include <memory>
#include <string>

#include <grpcpp/grpcpp.h>
#include "proto/vsock_service.grpc.pb.h"

using grpc::Server;
using grpc::ServerBuilder;
using grpc::ServerContext;
using grpc::Status;
using popovicu_vsock::VsockService;
using popovicu_vsock::AdditionRequest;
using popovicu_vsock::AdditionResponse;

// Service implementation
class VsockServiceImpl final : public VsockService::Service {
  Status Addition(ServerContext* context, const AdditionRequest* request,
                  AdditionResponse* response) override {
    int32_t result = request->a() + request->b();
    response->set_c(result);
    std::cout << "Addition: " << request->a() << " + " << request->b()
              << " = " << result << std::endl;
    return Status::OK;
  }
};

void RunServer() {
  // Server running on the VM (guest).
  // "vsock:3:9999" listens for vsock connections on port 9999; CID 3 is this
  // guest's own CID, configured via QEMU's guest-cid flag (see below).
  std::string server_address("vsock:3:9999");
  VsockServiceImpl service;

  ServerBuilder builder;
  builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
  builder.RegisterService(&service);

  std::unique_ptr<Server> server(builder.BuildAndStart());
  std::cout << "Server listening on " << server_address << std::endl;

  server->Wait();
}

int main() {
  RunServer();
  return 0;
}

The only part that needs clarification is server_address. The vsock: prefix indicates that we are using vsock as the transport layer. gRPC supports a variety of transports, including TCP/IP and Unix sockets.

The number 3 is the CID, or context ID. It works much like an IP address, and some CIDs have special meanings. For example, CID 2 represents the VM host itself: if the VM needs to connect to a vsock socket on the host, it targets CID 2. CID 1 is reserved as the loopback address. Guest VMs are generally assigned CIDs starting from 3.

9999 is simply a port number, which works just like a TCP/IP port.
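
To make these addressing rules concrete, here is a minimal sketch (not part of the repository) that assembles gRPC vsock target strings from the well-known CID constants exposed by the Linux UAPI header linux/vm_sockets.h; the VsockTarget helper is purely illustrative:

#include <linux/vm_sockets.h>  // VMADDR_CID_* well-known context IDs

#include <cstdint>
#include <iostream>
#include <string>

// Builds a gRPC vsock target of the form "vsock:<cid>:<port>".
std::string VsockTarget(std::uint32_t cid, std::uint32_t port) {
  return "vsock:" + std::to_string(cid) + ":" + std::to_string(port);
}

int main() {
  // VMADDR_CID_HYPERVISOR == 0 and VMADDR_CID_HOST == 2; recent kernels also
  // define VMADDR_CID_LOCAL == 1 for loopback. Guest CIDs start at 3.
  std::cout << VsockTarget(VMADDR_CID_HOST, 9999) << std::endl;  // guest -> host
  std::cout << VsockTarget(3, 9999) << std::endl;  // host -> guest with CID 3
  return 0;
}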

Client implementation

The BUILD file, again, is quite simple:

load("@rules_cc//cc:defs.bzl", "cc_binary")

cc_binary(
    name = "client",
    srcs = [
        "client.cc",
    ],
    deps = [
        "@grpc//:grpc++",
        "//proto:vsock_service_cc_grpc",
        "//proto:vsock_service_cc_proto",
    ],
    linkstatic = True,
    linkopts = [
        "-static",
    ],
)

And the C++ code:

#include <iostream>
#include <memory>
#include <string>

#include <grpcpp/grpcpp.h>
#include "proto/vsock_service.grpc.pb.h"

using grpc::Channel;
using grpc::ClientContext;
using grpc::Status;
using popovicu_vsock::VsockService;
using popovicu_vsock::AdditionRequest;
using popovicu_vsock::AdditionResponse;

class VsockClient {
 public:
  VsockClient(std::shared_ptr<Channel> channel)
      : stub_(VsockService::NewStub(channel)) {}

  int32_t Add(int32_t a, int32_t b) {
    AdditionRequest request;
    request.set_a(a);
    request.set_b(b);

    AdditionResponse response;
    ClientContext context;

    Status status = stub_->Addition(&context, request, &response);

    if (status.ok()) {
      return response.c();
    } else {
      std::cout << "RPC failed: " << status.error_code() << ": "
                << status.error_message() << std::endl;
      return -1;
    }
  }

 private:
  std::unique_ptr<VsockService::Stub> stub_;
};

int main() {
  // Client running on host, connecting to VM server
  // vsock:3:9999 means connect to CID 3 (guest VM) on port 9999
  // CID 3 is an example - adjust based on your VM's actual CID
  std::string server_address("vsock:3:9999");

  VsockClient client(
      grpc::CreateChannel(server_address, grpc::InsecureChannelCredentials()));

  int32_t a = 5;
  int32_t b = 7;
  int32_t result = client.Add(a, b);

  std::cout << "Addition result: " << a << " + " << b << " = " << result
            << std::endl;

  return 0;
}

Putting it all together

Bazel shines here. All you need is a working C++ compiler on your host system; Bazel automatically fetches and builds everything else, including the Protobuf compiler.

To get the statically linked server binary:

bazel build //server

Similarly, for the client:

bazel build //client

To create the VM image, I used debootstrap on an ext4 image, as described in this post on X.

This is a quick, albeit hacky, way to create a runnable Debian instance.

Next, I copied the newly created server binary into /opt within the image.

Now, the VM can be booted directly into the server binary as soon as the kernel is running:

 qemu-system-x86_64 -m 1G -kernel /tmp/linux/linux-6.17.2/arch/x86/boot/bzImage \
  -nographic \
  -append "console=ttyS0 init=/opt/server root=/dev/vda rw" \
  --enable-kvm \
  -smp 8 \
  -drive file=./debian.qcow2,format=qcow2,if=virtio -device vhost-vsock-pci,guest-cid=3

As shown in the last line, the QEMU VM is given a virtual device that acts as the vsock networking hardware, configured with guest CID 3, which is the CID both the server and the client address.

QEMU output shows:

[    1.581192] Run /opt/server as init process
[    1.889382] random: crng init done
Server listening on vsock:3:9999

To send an RPC from the host to the server, I ran the client binary:

bazel run //client

The output confirmed the result:

Addition result: 5 + 7 = 12

Accordingly, the server output shows:

Addition: 5 + 7 = 12

We have successfully implemented RPC from host to VM!

Under the hood

I haven’t gone deep into the low-level system API for vsock sockets, because frameworks like gRPC generally abstract it away. However, vsock sockets are very similar to TCP/IP sockets: once created, they are used in the same way, although the creation API differs slightly. Information on this is readily available online.
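
For a rough idea of what that creation API looks like, below is a minimal sketch of a raw vsock client written directly against the Linux socket interface. It is based on the AF_VSOCK API rather than on the repository; the CID and port mirror the example values used throughout this post.

#include <linux/vm_sockets.h>  // struct sockaddr_vm, AF_VSOCK addressing
#include <sys/socket.h>        // socket(), connect()
#include <unistd.h>            // close()

#include <cstdio>

int main() {
  // Create the socket: only the address family differs from TCP/IP code.
  int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
  if (fd < 0) {
    perror("socket");
    return 1;
  }

  // Address the peer by (CID, port) instead of (IP address, port).
  sockaddr_vm addr{};
  addr.svm_family = AF_VSOCK;
  addr.svm_cid = 3;      // example: the guest VM with CID 3
  addr.svm_port = 9999;  // same port number the gRPC server above listens on

  if (connect(fd, reinterpret_cast<const sockaddr*>(&addr), sizeof(addr)) < 0) {
    perror("connect");
    close(fd);
    return 1;
  }

  // From here on, read()/write()/send()/recv() behave just like on a TCP socket.
  close(fd);
  return 0;
}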

Conclusion

I believed it was more valuable to focus on a high-level RPC system over vsock instead of raw sockets. With gRPC, you can implement a structured RPC server running inside a VM. This opens the door to running interesting applications in sealed, isolated environments, and it lets you easily connect different OSes (for example, a Debian host and an Arch guest) or any platform that supports vsock. Additionally, gRPC allows you to write clients and servers in many different languages and technologies. All of this is achieved without network virtualization, resulting in increased efficiency.

I hope this was fun and useful for you too.

Please consider following me on X and LinkedIn for further updates.




