Author: ceracm

Build Eventuate Applications with AWS Lambda and Serverless

We are super excited to announce that you can now develop and deploy Eventuate applications using AWS Lambda and the Serverless Framework. AWS Lambda functions have always been able to use the Eventuate APIs to create, update and find aggregates. In fact, the Eventuate Signup page is a serverless application. What is new is that AWS Lambda functions can now subscribe to events published by the Eventuate event store. As a result, Eventuate applications can now be completely serverless.

We have written a plugin for the Serverless Framework that makes this super easy. You simply specify in serverless.yml the events that your lambda is interested in. When Serverless deploys your lambda, the plugin tells Eventuate to dispatch those events to it. The following diagram shows how this works.
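As a rough sketch of what such a configuration might look like, here is a hypothetical serverless.yml fragment declaring a function and the Eventuate events it subscribes to. The plugin name, event keys, and property names below are illustrative assumptions, not the plugin's documented syntax:

```yaml
service: order-history

plugins:
  - serverless-eventuate   # hypothetical plugin name

functions:
  orderHistory:
    handler: handlers.orderHistory
    events:                # property keys below are illustrative
      - eventuate:
          subscriberId: orderHistoryService
          entityType: net.chrisrichardson.ecommerce.Order
          eventTypes:
            - OrderCreated
            - OrderCancelled
```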


For more information, please see the following examples:

Eventuate Local now supports snapshots

Event sourcing persists domain objects as a sequence of (state changing) events. To load a domain object from an event store, an application must load and replay those events. Long-lived domain objects can potentially have a huge number of events, which would make loading them very inefficient.

The solution is to periodically persist a snapshot of the domain object’s state. The application only has to load the most recent snapshot and the events that have occurred since that snapshot was created.

The Eventuate API now supports a snapshot mechanism. To create snapshots for a domain object, you simply define a SnapshotStrategy in the Spring application context. A SnapshotStrategy defines two methods:

  • possiblySnapshot() – invoked when an AggregateRepository updates an aggregate. It can decide to create a snapshot based on, for example, the number of events since the last snapshot
  • recreateAggregate() – recreates an aggregate from a saved snapshot
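To make the two-method contract concrete, here is a minimal, self-contained Java sketch of a threshold-based strategy. It defines its own tiny stand-in types rather than using the actual Eventuate API classes, so treat every name below as illustrative:

```java
import java.util.Optional;

// Minimal stand-in for a persisted snapshot; names are illustrative,
// not the actual Eventuate API types.
class Snapshot {
    final int version;
    final String state;
    Snapshot(int version, String state) { this.version = version; this.state = state; }
}

interface SnapshotStrategy {
    // Called when an AggregateRepository updates an aggregate; may decide
    // to take a snapshot, e.g. based on the number of events.
    Optional<Snapshot> possiblySnapshot(int version, String currentState);

    // Recreates aggregate state from a saved snapshot.
    String recreateAggregate(Snapshot snapshot);
}

// Example policy: take a snapshot every n events.
class EveryNEventsStrategy implements SnapshotStrategy {
    private final int n;
    EveryNEventsStrategy(int n) { this.n = n; }

    @Override
    public Optional<Snapshot> possiblySnapshot(int version, String currentState) {
        return version % n == 0 ? Optional.of(new Snapshot(version, currentState))
                                : Optional.empty();
    }

    @Override
    public String recreateAggregate(Snapshot snapshot) {
        return snapshot.state;
    }
}
```

With a strategy like this, loading an aggregate only requires the latest snapshot plus the events recorded after it, rather than the full event history.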

Currently, only Eventuate Local supports snapshots. Eventuate SaaS will support them soon. For more information, see Defining snapshot strategies in Java.


Introducing the Eventuate Local Console

We are super excited to announce that the Eventuate Local event store now has a simple console. It lets you browse the aggregate types and view aggregate instances. You can also see a real-time view of events as they are saved in the event store.

Here is a screenshot showing the TodoAggregate instances:


Here is a screenshot showing recent events:


The UI is implemented using ReactJS and a Node.js-based server. It is packaged as a Docker image and can be run by defining the following container in your project’s docker-compose.yml:

 console:
   image: eventuateio/eventuateio-local-console:0.12.0
   links:
     - mysql
     - zookeeper
   ports:
     - "8085:8080"
   environment:
     SPRING_DATASOURCE_URL: jdbc:mysql://mysql/eventuate

Run the Eventuate Todo example application to see it in action.

The microservice architecture is a means to an end: enabling continuous delivery/deployment

A while ago we wrote that Successful software development = organization + process + architecture and described how the microservice architecture has a key role to play when developing large, complex applications. It is important to remember, however, that the microservice architecture is merely a means to an end. The ultimate goal is to deliver better software faster. Today, that invariably means continuous delivery (for an installed product) or continuous deployment (for an -aaS product).

To clarify the goal of the microservice architecture, we decided to redraw the triangle with continuous delivery/deployment at the apex. The two corners at the base of the triangle are small, agile, autonomous, cross-functional teams, and the microservice architecture.


The microservice architecture enables teams to be agile and autonomous. Together, the team of teams and the microservice architecture enable continuous delivery/deployment.

Eventuate Local: Event Sourcing and CQRS with Spring Boot, Apache Kafka and MySQL

Eventuate™  is a platform for developing transactional business applications that use the microservice architecture. Eventuate provides an event-driven programming model for microservices that is based on event sourcing and CQRS.

The benefits of Eventuate include:

  • Easy implementation of eventually consistent business transactions that span multiple microservices
  • Automatic publishing of events whenever data changes
  • Faster and more scalable querying by using CQRS materialized views
  • Reliable auditing for all updates
  • Built-in support for temporal queries

Eventuate™ Local is the open-source version of Eventuate™. It has the same client-framework API as the SaaS version but a different architecture. It uses a MySQL database to persist events, which guarantees that an application can consistently read its own writes. Eventuate Local tails the MySQL transaction log and publishes events to Apache Kafka, which enables applications to benefit from the Apache Kafka ecosystem, including Kafka Streams.
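The event-sourcing style of persistence that Eventuate provides can be illustrated with a small, self-contained Java sketch. It does not use the actual Eventuate client classes (whose base types and signatures differ); the names below are invented for illustration of the pattern: state changes are recorded as events, and an aggregate is rebuilt by replaying them.

```java
import java.util.Arrays;
import java.util.List;

// Events recording each state change (names invented for illustration).
interface AccountEvent {}
class AccountOpened implements AccountEvent {
    final long initialBalance;
    AccountOpened(long initialBalance) { this.initialBalance = initialBalance; }
}
class MoneyDeposited implements AccountEvent {
    final long amount;
    MoneyDeposited(long amount) { this.amount = amount; }
}

// An event-sourced aggregate: its state is derived purely from its events.
class Account {
    private long balance;

    // Apply an event to update the in-memory state.
    void apply(AccountEvent event) {
        if (event instanceof AccountOpened) {
            balance = ((AccountOpened) event).initialBalance;
        } else if (event instanceof MoneyDeposited) {
            balance += ((MoneyDeposited) event).amount;
        }
    }

    // Loading from the event store = replaying the event history.
    static Account recreate(List<AccountEvent> history) {
        Account account = new Account();
        for (AccountEvent e : history) account.apply(e);
        return account;
    }

    long balance() { return balance; }
}
```

Because every state change is an event, publishing those events to Kafka (as Eventuate Local does via the MySQL transaction log) comes for free, which is what enables the CQRS views and auditing listed above.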

This diagram shows the architecture:

Eventuate Local currently only supports Spring Boot applications but we plan to add support for other frameworks and languages over time.

Learn more about Eventuate Local

To find out more about Eventuate Local:

JavaOne 2016: Handling Eventual Consistency in JVM Microservices with Event Sourcing

Last week at JavaOne 2016, Chris Richardson, founder of Eventuate, Inc, and Kenny Bastani, developer advocate at Pivotal, gave a talk on using Event Sourcing to maintain data consistency in a microservices architecture.

Example code

Here is the code for the sample Spring Boot application that Kenny developed for the talk. What is especially exciting is that the microservices demo is built using Eventuate!


Here are the slides:

Learn more about Eventuate

To find out more about Eventuate:


The new Eventuate Java Client

We are super excited to announce that we have started migrating the example applications to the new Eventuate Java client. The highlights of this new client include:

  • Open-source with Javadoc (still work in progress) and source jars
  • Written in Java 8, instead of Scala with a Java wrapper
  • Fully reactive
  • Better modularity, which will make it easier to support more than just Spring applications

So far we have migrated the Money Transfer and the Customer and Orders examples.

For more information, please see the revised getting started guide.

Developing microservices with #DDD aggregates (SpringOne platform, #s1p)

Last week at Spring One Platform, our founder Chris Richardson gave a talk on developing microservices with Domain-Driven Design aggregates.



Here are the slides:

Example code

Here is the code for the Orders and Customers example.

Learn more about Eventuate

To find out more about Eventuate:

Deploying Spring Boot microservices using Docker 1.12 orchestration – part 1

Docker 1.12 was announced earlier this week at DockerCon. Built-in orchestration was one of the most interesting new features. This blog post describes how to deploy one of the Eventuate example applications using Docker orchestration.

Install Docker 1.12 on AWS

The first step was to launch three EC2 instances (a master and two worker nodes) and install Docker 1.12. We used an Ubuntu 15.10 AMI and ran this script to install Docker 1.12-rc2 on each instance.

You must also configure the security group to allow the nodes to communicate, as described in the orchestration tutorial:

  • TCP port 2377 for cluster management communications
  • TCP and UDP port 7946 for communication among nodes
  • TCP and UDP port 4789 for overlay network traffic

You must also allow TCP traffic on port 8080 from your local machine in order to access the running application.

Setting up the Swarm

The next step is to set up the Docker swarm, which is a cluster of Docker engines. To do that, run docker swarm init on your master node (pick one). Then, run docker swarm join <masterIp>:2377 on each worker node (the other ones). You can verify that the swarm is set up by running docker node ls on the master. This command lists the nodes that comprise the swarm.

Build the application

Once the Docker swarm is set up, you need to build the Kanban example application on the master node:

# Install Java:
apt-get install -y openjdk-8-jdk

# Clone the example:
git clone

# Build it:
cd es-kanban-board/java-server
./gradlew assemble

# Build the docker images:
Note: in order to run this application you need to get credentials for the Eventuate event store.

Create the MongoDB service

This application uses MongoDB, so let’s create a MongoDB service.

First, we will create a Docker overlay network that the microservices will use to communicate with MongoDB.

docker network create -d overlay kanbannet

Next, we will create the MongoDB service:

docker service create \
  --name mongo \
  --network kanbannet \
  --replicas 1 \
  -p 27017:27017/tcp \
  mongo:3.0.4
The service runs the mongo:3.0.4 image. Docker orchestration ensures that one instance of MongoDB is running at all times on a node in the swarm. The -p parameter says that the service is accessible on port 27017.

A MongoDB client running on an EC2 instance (one of the swarm nodes or elsewhere) can connect to port 27017 on any of the nodes, i.e. masterOrWorkerNodeIpAddress:27017. The Docker routing mesh routes the traffic to the MongoDB container. A service running in the swarm can simply use the service name as the DNS name to access MongoDB, i.e. mongo:27017.

Deploy the service

The next step is to deploy the microservices. Rather than deploying the individual services, let’s first deploy the monolithic version of the application in order to learn how one service (the Java application) connects to another service (MongoDB).

Here is the command to create that service:

docker service create \
  --name standalone \
  --network kanbannet \
  -e EVENTUATE_API_KEY_ID=$EVENTUATE_API_KEY_ID \
  -e EVENTUATE_API_KEY_SECRET=$EVENTUATE_API_KEY_SECRET \
  -e SPRING_DATA_MONGODB_URI=mongodb://mongo:27017/kanban \
  -p 8080:8080/tcp \
  eventuate_kanban_standalone_service
This service runs the eventuate_kanban_standalone_service image that was built earlier. The -e options specify the Eventuate credentials that you received when you signed up and the MongoDB connection URL. Note that the URL uses the mongo hostname, which is the name of the MongoDB service and is resolved by Docker’s built-in DNS server.

The service is accessible via port 8080 on every node. Provided that the security group is configured to allow port 8080 traffic, you should be able to access the application from your desktop/laptop using the URL http://<ec2-instance-hostname>:8080.

You can examine the materialized MongoDB views using the MongoDB CLI. You can run that using the following command:

docker run --rm -i -t --net=host mongo:3.0.4 /usr/bin/mongo \
   --host <masterOrWorkerNodeIpAddress>

Note the use of the --net=host option. I discovered that it was required in order for the MongoDB client to connect to the server.

What is next

In a later blog post we’ll describe how we deployed the individual microservices.

Learn more about Eventuate

To find out more about Eventuate: