Spring Boilerplate for Back-End Development using Java and the Spring Framework.
- Check the logs (a Log4J sketch follows this list);
- Test the dependencies and execution locally;
- Run automated tests;
- If necessary, merge a hotfix on git;
- Rebuild the project and restart the service;
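Since Log4J is the project's logger (see the dependency list below), checking the logs usually means reading output produced like this. This is a minimal sketch assuming the Log4j 2 API; the class name is illustrative only:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hypothetical class, used only to show how log entries are produced
public class FlightServiceExample {
    private static final Logger logger = LogManager.getLogger(FlightServiceExample.class);

    public void process() {
        logger.info("Processing started");
        try {
            // ... business logic ...
        } catch (Exception e) {
            // appenders configured for Log4J decide where this ends up (console, file, ...)
            logger.error("Processing failed", e);
        }
    }
}
```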
- Java: programming language;
- JDK: Java Development Kit (compiler, runtime, VM...);
- Maven: Java project manager;
- Spring Framework: Multi-purpose framework, used here for (a minimal sketch follows this list):
  - Boot: Spring initial setup;
  - Web: HTTP server creation and consumption;
  - JPA: SQL and NoSQL database management through abstraction;
- PostgreSQL: Relational database;
- Kafka: Event streaming platform, including:
  - Kafka Cluster: Event brokers cluster;
  - Message Broker: Message topics server;
  - Producers and Consumers: Event message producers and consumers;
  - Kafka Streams: Event streaming;
- Log4J: Custom logger with appenders;
- Docker: Service isolation and process resource management with containers;
- JUnit: Testing framework;
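To make the Boot, Web, and JPA items above concrete, here is a minimal sketch of how they typically fit together, assuming Spring Boot 3 / Jakarta Persistence (consistent with the JDK 17 requirement below). The entity, table, and endpoint names are illustrative, not part of this project:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

// Boot: application entry point with auto-configuration
@SpringBootApplication
public class FlightManagerApplication {
    public static void main(String[] args) {
        SpringApplication.run(FlightManagerApplication.class, args);
    }
}

// JPA: entity mapped to a relational table (hypothetical Flight example)
@Entity
class Flight {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String origin;
    private String destination;
    // getters and setters omitted for brevity
}

// JPA: repository abstraction over the underlying SQL queries
interface FlightRepository extends JpaRepository<Flight, Long> {
}

// Web: REST controller exposing the entity over HTTP
@RestController
@RequestMapping("/api/flights")
class FlightController {
    private final FlightRepository repository;

    FlightController(FlightRepository repository) {
        this.repository = repository;
    }

    @GetMapping
    public List<Flight> findAll() {
        return repository.findAll();
    }
}
```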
To run this project, it is recommended to have JDK 17 or higher installed, together with Apache Kafka 3.4.0 and Apache Maven 3.5.0.
This project was created using:
# create Maven project
$ mvn archetype:generate -DgroupId=com.flightmanager -DartifactId=Flight_Manager_System -DarchetypeVersion=1.4 -DinteractiveMode=false
- Install project dependencies
# install dependencies
$ mvn install
# reinstall dependencies
$ mvn dependency:purge-local-repository
# recompile project
$ mvn clean compile
- Start Docker containers;
- Mock external services;
- Create database entities and populate records;
- Start HTTP REST API;
- Start TCP WebSocket;
- Send message to Queue;
- Receive message from Queue (a Spring Kafka sketch follows this list);
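One way to implement the send/receive steps in a Spring project is with Spring for Apache Kafka. This is only a sketch under that assumption; the topic name reuses `topic01` from the CLI examples further down, and the class names are placeholders:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Hypothetical producer: sends event messages to a Kafka topic
@Service
class FlightEventProducer {
    private final KafkaTemplate<String, String> kafkaTemplate;

    FlightEventProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void send(String key, String payload) {
        // topic name matches the CLI examples below; adjust as needed
        kafkaTemplate.send("topic01", key, payload);
    }
}

// Hypothetical consumer: receives event messages from the same topic
@Service
class FlightEventConsumer {
    @KafkaListener(topics = "topic01", groupId = "G1")
    public void listen(String payload) {
        System.out.println("Received: " + payload);
    }
}
```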
- Copy dotenv file
cp envs/.env.local ./.env # copy development local example
source ./.env # load envs on shell session
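Once the envs are loaded into the shell session, Spring resolves them through its Environment. A sketch of how that consumption could look; the variable names below are hypothetical, use whatever envs/.env.local actually defines:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

// Illustrative only: binds shell environment variables to Spring-managed fields.
// KAFKA_BOOTSTRAP_SERVERS and DATABASE_URL are hypothetical names with local defaults.
@Configuration
class EnvConfigExample {

    @Value("${KAFKA_BOOTSTRAP_SERVERS:localhost:9092}")
    private String kafkaBootstrapServers;

    @Value("${DATABASE_URL:jdbc:postgresql://localhost:5432/postgres}")
    private String databaseUrl;
}
```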
- Initialize the compose file (docker-compose.yml) available in the project root folder.
# create and run the essential docker containers in the background
docker-compose up -d zookeeper kafka database
# or
# create and run all docker containers in the background
docker-compose up -d
# delete all containers and volumes
docker-compose down -v
After installing the JDK, you can run the project by typing the following commands in the terminal:
# run project
$ mvn exec:java -Dexec.args="arg1 arg2 arg3"
# run tests
$ mvn test
# create JAR file
$ mvn package
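The `mvn test` command above runs the test suite with JUnit. A minimal test class, assuming JUnit 5 (the exact JUnit version is not stated here):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Minimal JUnit 5 test, picked up by `mvn test` through the Surefire plugin
class FlightNumberTest {

    @Test
    void buildsFlightNumberFromCarrierCodeAndDigits() {
        String flightNumber = "LA" + 3456;
        assertEquals("LA3456", flightNumber);
    }
}
```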
# start Zookeeper server manager
$ zookeeper-server-start infra/zookeeper/zookeeper.properties
# start Kafka server
$ kafka-server-start infra/kafka/server.properties
# create Kafka topic
$ kafka-topics --bootstrap-server=localhost:9092 --create --topic=topic01 --partitions=3 --replication-factor=1
# create Kafka producer
$ kafka-console-producer --bootstrap-server=localhost:9092 --topic=topic01 --property="parse.key=true" --property="key.separator=:" # separator is ':'
# create Kafka consumer (with group)
$ kafka-console-consumer --bootstrap-server=localhost:9092 --topic=topic01 --group=G1
# list topics
$ kafka-topics --bootstrap-server=localhost:9092 --list
# get topic details
$ kafka-topics --bootstrap-server=localhost:9092 --describe --topic=topic01
# list consumer groups
$ kafka-consumer-groups --bootstrap-server=localhost:9092 --list
# get consumer group details
$ kafka-consumer-groups --bootstrap-server=localhost:9092 --describe --group=G1
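Kafka Streams, listed among the dependencies, consumes topics as continuous streams instead of discrete polls. A sketch of a trivial topology, reusing `topic01` from the commands above; the application id and the output topic name are hypothetical:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class StreamsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "flight-streams-example"); // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // read topic01, upper-case each value, write to a hypothetical output topic
        KStream<String, String> source = builder.stream("topic01");
        source.mapValues(value -> value.toUpperCase())
              .to("topic01-upper", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```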
- localhost:3000 - Application Interface (API)
  - / - WebSocket Root Endpoint (a sketch follows this list)
  - /api - REST Root Endpoint
  - /api/docs - Swagger API Documentation (Page)
- localhost:4000 - Mocked Service Page
- localhost:8080 - Adminer Page
- localhost:8081 - Kafdrop Page
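The WebSocket root endpoint above could be registered with plain Spring WebSocket roughly as follows; this is a sketch under that assumption (the project may wire it differently), and the echo handler is made up for illustration:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.socket.TextMessage;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.config.annotation.EnableWebSocket;
import org.springframework.web.socket.config.annotation.WebSocketConfigurer;
import org.springframework.web.socket.config.annotation.WebSocketHandlerRegistry;
import org.springframework.web.socket.handler.TextWebSocketHandler;

// Illustrative only: registers a WebSocket handler on "/" to match the endpoint map above.
@Configuration
@EnableWebSocket
class WebSocketConfigExample implements WebSocketConfigurer {

    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        registry.addHandler(new EchoHandler(), "/");
    }

    // Hypothetical handler that echoes incoming text messages back to the client
    static class EchoHandler extends TextWebSocketHandler {
        @Override
        protected void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
            session.sendMessage(new TextMessage("echo: " + message.getPayload()));
        }
    }
}
```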
- Project
- Architecture
- Dependencies
- Docs
- Infra Steps (update)
- Events Flow
- Database Schema
- Swagger
- Infra
- Envs
- Scripts
- Migrations
- Producers
- Consumers
- Tests
- Features
- Infra
- Hibernate + JPA
- Kafka & Kafka Streams
- Multi Threads
- Domain
- App
- Usecases
- Strategies
- API
- Controllers
- DTOs & Validations
- Auth
- Controllers
- Infra
- SQL
- Manual Queries
- Index
- Views
- Transactions