4/21/2026

Linux for DevOps (Beginners)



1. Linux Fundamentals


1.1 What is Open Source?

Open-source software is software whose source code is freely available for anyone to view, modify, and distribute. Linux is built on open-source principles.

  • Source code is publicly accessible
  • Community-driven development and contributions
  • Free to use, modify, and redistribute

Examples: Linux, Python, Firefox, Android


1.2 What is Linux?

Linux is a free, open-source operating system kernel created by Linus Torvalds in 1991. It powers everything from smartphones to supercomputers.

  • Linux is technically a kernel, not a full OS
  • Full OS = Linux kernel + GNU tools + package manager + desktop environment
  • Popular distributions (distros): Ubuntu, Fedora, Debian, CentOS, Arch Linux
  • Used in servers, cloud, Android, IoT, supercomputers


1.3 Linux vs UNIX

Linux = Free, open-source kernel by Linus Torvalds (1991)

UNIX = Proprietary OS from Bell Labs (1969), not free

macOS = UNIX-based (BSD), proprietary by Apple

Similarity = Both follow POSIX standards; Linux was inspired by UNIX


1.4 Why Use Linux?

  • Free and open source - no licensing costs
  • Highly secure - fewer viruses, strong permission model
  • Stable and reliable - servers run for years without reboots
  • Lightweight - runs on old hardware
  • Industry standard - most servers, cloud, and DevOps tools run on Linux
  • Customizable - choose your own components and desktop

1.5 Linux vs Windows

Cost => Linux is free; Windows requires a license
Security => Linux has fewer viruses and a strong permission model; Windows is a bigger malware target
Usage => Linux dominates servers; Windows dominates desktops
CLI => Linux is CLI-first; Windows is GUI-first
File System => Linux: ext4, XFS; Windows: NTFS, FAT32
Software => Linux uses package managers; Windows uses installers (.exe)

1.6 What is a Kernel?

The kernel is the core of the operating system. It manages hardware resources and allows
software to communicate with hardware.
  • Manages CPU, memory, and I/O devices
  • Provides system calls for programs to request services
  • Acts as a bridge between applications and hardware
  • Linux kernel is monolithic - all kernel services run in kernel space

2. Installing Ubuntu via WSL (Windows Subsystem for Linux)

WSL lets you run a Linux terminal directly on Windows without a VM.
  • Open PowerShell as Administrator
  • Run: wsl --install
  • Restart your PC
  • Open Ubuntu from the Start Menu
  • Set username and password on first launch

Useful WSL commands:

# List installed distros
wsl --list --verbose

# Use WSL 2
wsl --set-default-version 2

# Open Linux files in Windows Explorer
explorer.exe .

3. Basic Terminal Commands
3.1 Navigation


pwd

Print Working Directory - shows current location

ls
List files and directories

ls -la
List all files (including hidden) with details

cd <dir>
Change directory

cd ..
Go up one directory level

cd ~
Go to home directory

cd /
Go to root directory



3.2 File and Directory Operations

mkdir <name>
Create a new directory

mkdir -p a/b/c
Create nested directories

touch <file>
Create an empty file or update timestamp

rm <file>
Remove a file

rm -r <dir>
Remove directory and contents recursively

rm -rf <dir>
Force remove without prompts (use carefully!)

cp <src> <dst>
Copy a file

mv <src> <dst>
Move or rename a file

cat <file>
Display file contents

less <file>
Scroll through file contents

nano <file>
Open file in nano text editor

vim <file>
Open file in vim text editor
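The commands above combine naturally into everyday workflows. A quick round trip (all file and directory names here are made up for the demo):

```shell
mkdir -p project/src                           # nested directories in one go
touch project/src/main.sh                      # create an empty file
cp project/src/main.sh project/src/backup.sh   # copy it
mv project/src/backup.sh project/src/old.sh    # rename the copy
ls project/src                                 # main.sh  old.sh
rm -r project                                  # remove everything recursively
```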


3.3 File Permissions - chmod
Linux file permissions control who can read, write, or execute files.

Permission format: rwxrwxrwx (owner / group / others)

r = 4
Read permission

w = 2
Write permission

x = 1
Execute permission

chmod 755 file
Owner: rwx, Group: r-x, Others: r-x

chmod 644 file
Owner: rw-, Group: r--, Others: r--

chmod +x file
Add execute permission for all

chmod -w file
Remove write permission
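Each octal digit is just the sum of r=4, w=2, x=1 for owner, group, and others. A quick sandbox check (the file name is only illustrative):

```shell
touch demo.txt
chmod 640 demo.txt   # 6 = 4+2 (rw-), 4 = r--, 0 = ---
ls -l demo.txt       # permission string: -rw-r-----
chmod a+x demo.txt   # add execute for owner, group, and others
ls -l demo.txt       # now: -rwxr-x--x
```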



3.4 Links
Links allow multiple references to the same file.

ln <src> <link>
Create a hard link - points to same inode

ln -s <src> <link>
Create a soft (symbolic) link - like a shortcut

alias ll='ls -la'
Create a command alias (session only)

Key difference: Deleting the original breaks a soft link but NOT a hard link.
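A quick way to see that difference in action (file names invented for the demo):

```shell
echo "original" > data.txt
ln data.txt hard.txt      # hard link: a second name for the same inode
ln -s data.txt soft.txt   # soft link: stores the path "data.txt"
rm data.txt               # delete the original name
cat hard.txt              # still prints "original" - the inode lives on
cat soft.txt              # fails: the path the symlink points to is gone
```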



4. File Handling & Text Processing
4.1 head & tail

head <file>
Show first 10 lines of a file

head -n 20 <file>
Show first 20 lines

tail <file>
Show last 10 lines

tail -f <file>
Follow file in real-time (great for logs)
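For example, on a small generated file (seq just gives us numbered lines to slice):

```shell
seq 1 15 > lines.txt
head -n 3 lines.txt   # first three lines: 1 2 3
tail -n 2 lines.txt   # last two lines: 14 15
```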



4.2 wc - Word Count

wc <file>
Show lines, words, and bytes

wc -l <file>
Count lines only
Count lines only

wc -w <file>
Count words only

wc -c <file>
Count bytes/characters only
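A small sanity check (file contents made up for the demo):

```shell
printf 'one two\nthree\n' > words.txt
wc -l words.txt   # 2 lines
wc -w words.txt   # 3 words
```

Reading from stdin (wc -l < words.txt) prints only the number, without the file name, which is handy in scripts.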

4.3 sort

sort <file>
Sort lines alphabetically

sort -r <file>
Sort in reverse order

sort -n <file>
Sort numerically

sort -u <file>
Sort and remove duplicates

sort -k2 <file>
Sort by the 2nd column
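Alphabetical and numeric sorts give different answers on the same data, which the following invented file shows:

```shell
printf 'banana 2\napple 10\ncherry 1\n' > items.txt
sort items.txt        # alphabetical: apple 10, banana 2, cherry 1
sort -nk2 items.txt   # numeric on column 2: cherry 1, banana 2, apple 10
```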

4.4 grep - Search Text
grep searches for patterns in files or output.

grep 'pattern' file
Search for pattern in file

grep -i 'pattern' file
Case-insensitive search

grep -r 'pattern' dir/
Recursive search in directory

grep -n 'pattern' file
Show line numbers with results

grep -v 'pattern' file
Show lines that do NOT match

grep -c 'pattern' file
Count matching lines

grep -E 'p1|p2' file
Extended regex - match p1 OR p2
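For example, against a tiny log file (contents invented for the demo):

```shell
printf 'ERROR disk full\nINFO ok\nERROR timeout\n' > app.log
grep -c 'ERROR' app.log    # prints 2 - number of matching lines
grep -vn 'ERROR' app.log   # prints 2:INFO ok - non-matching lines, numbered
```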

4.5 find - Search Files

find . -name '*.txt'
Find all .txt files in current dir

find / -name 'file.log'
Find file across entire system

find . -type d
Find only directories

find . -type f
Find only regular files

find . -mtime -7
Files modified in last 7 days

find . -size +10M
Files larger than 10MB
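find can also run a command on every match with -exec. A minimal sketch (directory and file names are illustrative):

```shell
mkdir -p logs
touch logs/a.log logs/b.log logs/readme.txt
find logs -type f -name '*.log'         # lists logs/a.log and logs/b.log
find logs -name '*.log' -exec rm {} +   # delete every .log it finds
find logs -type f                       # only logs/readme.txt remains
```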


4.6 awk - Text Processing

awk '{print $1}' file
Print first column of each line

awk '{print $1, $3}' file
Print columns 1 and 3

awk -F',' '{print $2}' file
Use comma as delimiter (CSV)

awk 'NR == 5' file
Print line number 5

awk '{sum += $1} END {print sum}' file
Sum all values in column 1
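Putting the summing pattern to work on sample data (file contents invented for the demo):

```shell
printf '3 apples\n4 pears\n5 plums\n' > fruit.txt
awk '{sum += $1} END {print sum}' fruit.txt   # prints 12
awk '{print $2}' fruit.txt                    # prints the name column
```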

5. Disk Management & Archives

df -h
Show disk space usage (human readable)

du -sh <dir>
Show size of a directory

du -h --max-depth=1
Show sizes of subdirectories

lsblk
List block devices (disks, partitions)
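df and du answer different questions: df reports the filesystem's free space, du measures what a directory actually occupies. A tiny sandbox (names invented for the demo):

```shell
mkdir -p box
head -c 1024 /dev/zero > box/file.bin   # a 1 KB file
du -sh box                              # space the directory uses on disk
df -h .                                 # free space on the current filesystem
```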

5.1 Zip/Unzip

zip archive.zip file1 file2
Create a zip archive

zip -r archive.zip dir/
Zip an entire directory

unzip archive.zip
Extract a zip file

unzip -l archive.zip
List contents without extracting


5.2 tar - Tape Archive

tar -cvf archive.tar dir/
Create a tar archive

tar -xvf archive.tar
Extract a tar archive

tar -czvf archive.tar.gz dir/
Create compressed tar.gz

tar -xzvf archive.tar.gz
Extract tar.gz

tar -tf archive.tar
List contents of tar file
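A full create / inspect / extract round trip, using made-up file names:

```shell
mkdir -p site
echo "hello" > site/index.html
tar -czvf site.tar.gz site/       # create a compressed archive
tar -tf site.tar.gz               # list contents without extracting
mkdir restore
tar -xzf site.tar.gz -C restore   # extract into another directory
cat restore/site/index.html       # prints hello
```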




6. Process Management

6.1 View Processes

ps
List processes for current user

ps aux
List ALL processes with details

ps aux | grep nginx
Find specific process

top
Interactive real-time process viewer

htop
Improved interactive process viewer

pgrep <name>
Find process ID by name

6.2 Killing Processes

kill <PID>
Send SIGTERM (graceful stop) to process

kill -9 <PID>
Send SIGKILL (force stop) to process

killall <name>
Kill all processes by name

pkill <name>
Kill process by pattern match
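In a script you rarely know the PID in advance; $! captures the PID of the last background job so you can target it. The sleep here is a stand-in for any process:

```shell
sleep 60 &                # start a throwaway background process
PID=$!                    # remember its PID
kill "$PID"               # polite SIGTERM
wait "$PID" 2>/dev/null   # reap it; the process is now gone
```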




6.3 Background Jobs & nohup

command &
Run command in background

jobs
List background jobs

fg %1
Bring job 1 to foreground

bg %1
Resume job 1 in background

Ctrl + Z
Suspend current foreground job

nohup command &
Run command immune to hangups (survives logout)

nohup command > out.log 2>&1 &
Run nohup and save output to file

disown %1
Remove job from shell job table
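The nohup pattern in script form; the echo/sleep command is a stand-in for any long-running job:

```shell
nohup sh -c 'echo started; sleep 1; echo done' > out.log 2>&1 &
echo "background PID: $!"   # $! holds the PID of the last background job
wait                        # block until the job finishes
cat out.log                 # shows both lines of output
```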





4/20/2026

Soft Skills - Do and Don't







Work Ethic

Do: Work hard without anyone asking and without complaining

Don't: Focus on the number of hours - outcomes are far more important


Growth Mindset

Do: Crave learning and feedback, and put those lessons to use

Don't: Be too arrogant to grow


Professionalism

Do: Act maturely and respectfully no matter who you're with

Don't: Think you can cross a line with certain people or situations


Self-Awareness

Do: Pay attention to how your words and actions come across

Don't: Make assumptions about what other people will think


Emotional Intelligence

Do: Notice your emotions and manage your responses to them

Don't: Act out of anger


Communication

Do: Be succinct, clear, and direct in writing and speaking

Don't: Think complexity makes you sound smart


Drive

Do: Dive into projects headfirst and before anyone has to ask

Don't: Depend on others for motivation - find it within


Collegiality

Do: Be kind and easy to work with

Don't: Take it too far - you can be likeable without being a pushover


Adaptability

Do: Adjust quickly when there's new information or circumstances

Don't: Fail to draw lessons from mistakes, and act on them


Dependability

Do: Keep your promises, staying true to your word every time

Don't: Make excuses


Active Listening

Do: Listen attentively, validate their points, and ask questions

Don't: Interrupt, multitask, or make things all about you


Time Management

Do: Stay organized, prioritize your tasks, and avoid distractions

Don't: Miss deadlines or take on more than you can handle


Persistence

Do: Be resilient after setbacks, continuing to move forward

Don't: Give up when it gets hard


Perceptiveness

Do: Pay attention to the mood and responses of others

Don't: Fail to make adjustments based on what you notice


Teamwork

Do: Collaborate well, sharing work, information, and credit

Don't: Try to do it all yourself - it won't work as well as you think


Integrity

Do: Tell the truth and make the ethical choice in all scenarios

Don't: Try to hide or cover up





2/10/2026

How to Integrate k6 with Xray/Jira

 



Tags: k6, xray, jira, javascript, performance-testing, automation

Performance testing is a critical part of delivering scalable and reliable APIs. In this article, I’ll walk you through a simple and practical way to integrate k6 performance test results with Xray in Jira.


Why We Chose k6

Some time ago, I was assigned the task of writing a performance test for an API expected to handle a large number of requests. We needed a tool that was:

  • Easy to learn

  • Quick to implement

  • Friendly for QA engineers to contribute to

Having previously used Load Impact, I was already familiar with k6, which made it a natural choice.

Here are the main reasons we selected k6:

1. JavaScript-Based

Most QA engineers and developers on the team were already familiar with JavaScript. This eliminated the need to learn a new programming language.

2. Open Source

k6 is completely open-source. No licensing costs, and it has a very active community.

3. CI/CD Friendly

Integrating k6 into our CI/CD pipeline was straightforward and seamless.

There are many more advantages to using k6, but I’ll cover those in a separate post.


The Challenge: Sending k6 Results to Xray/Jira

After completing our performance test framework, we wanted our test results available in Jira via Xray.

Since we were already using Xray for test management, we needed a way to convert the k6 JSON report into a format compatible with Xray.

At the time, I couldn’t find a solution that worked for our specific case — so I built one.

Fortunately, k6 provides a powerful function called:

handleSummary() in k6

The handleSummary() function allows you to generate custom summary reports after a load test completes.

It gives you access to all test metrics and lets you:

  • Format the data

  • Export it as JSON

  • Print to stdout

  • Generate XML

  • Save to files

This flexibility made it possible to transform k6 results into an Xray-compatible JSON format.


The Solution: k6 → Xray JSON Converter

I created a helper script that:

  1. Takes the data object from handleSummary()

  2. Converts it into Xray’s required JSON structure

  3. Outputs a summary.json file

  4. Allows importing results into Xray

You can clone the repository here:

git clone https://github.com/skingori/k6-json-xray.git

Project Structure Recommendation

Inside your main project, create something like:

/helper
/src
/report

Place generator.js inside the helper folder to keep imports organized.


Prerequisites

Make sure you have the following installed:

  • Node.js

  • npm

  • k6


How It Works

If your k6 tests are organized in groups, and each group name corresponds to an Xray Test Case key, the script will automatically map the results.

For example, in Xray you may have test cases:

  • CALC-01

  • CALC-02

In your k6 test:

group('CALC-01', function () {
  // test code
});

group('CALC-02', function () {
  // test code
});

The script searches for these group names and assigns the test results to the matching Xray test cases.


How to Configure handleSummary()

First, import the required modules:

import { getSummary } from "./generator.js";
import { textSummary } from "https://jslib.k6.io/k6-summary/0.0.1/index.js";

If your file is inside a helper folder:

import { getSummary } from "./helper/generator.js";

Now, add the handleSummary() function at the end of your test script:

export function handleSummary(data) {
  return {
    stdout: textSummary(data, { indent: " ", enableColors: true }),
    "summary.json": JSON.stringify(getSummary(data, "CALC-2062", "CALC"), null, 2)
  };
}

What This Does

  • textSummary() → Prints a readable report to the console

  • getSummary() → Converts k6 data into Xray format

  • summary.json → Saves the formatted result

If you don’t need console output, you can remove the stdout line.


Complete Example Script

import http from 'k6/http';
import { sleep, group, check } from 'k6';
import { getSummary } from "./generator.js";
import { textSummary } from "https://jslib.k6.io/k6-summary/0.0.1/index.js";

export const options = {
  vus: 10,
  duration: '30s',
};

export default function () {
  group('CALC-01', function () {
    const resp = http.get('http://test.k6.io');
    check(resp, {
      'status is 200': (r) => r.status === 200,
    });
    sleep(1);
  });
  group('CALC-02', function () {
    const resp = http.get('http://test.k6.io');
    check(resp, {
      'status is 200': (r) => r.status === 200,
    });
    sleep(1);
  });
}

export function handleSummary(data) {
  return {
    stdout: textSummary(data, { indent: " ", enableColors: true }),
    "summary.json": JSON.stringify(getSummary(data, "CALC-2062", "CALC"), null, 2)
  };
}

Running the Script

Execute your test with:

k6 run script.js -e TEST_PLAN_KEY="CALC-2345" -e TEST_EXEC_KEY="CALC-0009"

What These Keys Mean

  • TEST_PLAN_KEY → Identifies the Xray Test Plan

  • TEST_EXEC_KEY → Identifies the Test Execution issue in Jira

These keys allow Xray to correctly associate the results.


Example Output (summary.json)

{
  "info": {
    "summary": "K6 Test execution - Mon Sep 09 2024 21:20:16 GMT+0300 (EAT)",
    "description": "This is a k6 test with a maximum iteration duration of 4.95s, 198 passed requests and 0 failures on checks",
    "user": "k6-user",
    "startDate": "2024-09-09T18:20:16.000Z",
    "finishDate": "2024-09-09T18:20:16.000Z",
    "testPlanKey": "CALC-2345"
  },
  "testExecutionKey": "CALC-0009",
  "tests": [
    {
      "testKey": "CALC-01",
      "start": "2024-09-09T18:20:16.000Z",
      "finish": "2024-09-09T18:20:16.000Z",
      "comment": "Test execution passed",
      "status": "PASSED"
    },
    {
      "testKey": "CALC-02",
      "start": "2024-09-09T18:20:16.000Z",
      "finish": "2024-09-09T18:20:16.000Z",
      "comment": "Test execution passed",
      "status": "PASSED"
    }
  ]
}

This JSON file can now be imported directly into Xray.


Final Thoughts

By leveraging k6’s handleSummary() function, you can easily customize performance test outputs and integrate them into tools like Xray/Jira.

This approach allows you to:

  • Keep performance testing within k6

  • Maintain JavaScript consistency

  • Seamlessly integrate with Jira workflows

  • Automate reporting in CI/CD

If you’re using Xray and k6 together, this solution provides a simple and scalable way to bridge the gap.

9/29/2025

ANTLR 4 into a Quarkus app

 

Using ANTLR 4 with Quarkus: A Practical Guide

Why combine ANTLR + Quarkus?

  • Quarkus gives you a fast, cloud-native Java framework with REST, live reload, native image, etc.

  • ANTLR is a powerful parser generator to define grammars and generate parsers/lexers.

  • Together, you can embed a DSL, query language, rule engine, or expression evaluator into a Quarkus microservice.

Challenges / caveats to be aware of:

  • You must integrate ANTLR’s code generation into your build so that generated sources are compiled by Quarkus.

  • In Quarkus dev mode, some Maven plugins (like antlr4-maven-plugin) may not run automatically; you might need to explicitly run generate-sources first.

  • Be mindful of version mismatches between the ANTLR tool/grammar version and the runtime. Quarkus has historically shipped with a specific ANTLR runtime version.


Example: Expression Parsing via REST

Let’s build a simple Quarkus app that:

  1. Accepts a mathematical expression via REST (e.g. "2 + 3 * 4")

  2. Uses ANTLR-generated parser to parse and evaluate the expression

  3. Returns the result


Project Structure

quarkus-antlr-example/
├── pom.xml
├── src
│   ├── main
│   │   ├── antlr4
│   │   │   └── Expr.g4
│   │   └── java
│   │       └── com/example
│   │           ├── ExpressionEvaluator.java
│   │           └── ExpressionResource.java
│   └── test
│       └── java
│           └── com/example
│               └── ExpressionResourceTest.java

Grammar: Expr.g4

Place under src/main/antlr4:

grammar Expr;

prog: expr EOF ;

expr : expr op=('*'|'/') expr   # MulDiv
     | expr op=('+'|'-') expr   # AddSub
     | INT                      # Int
     | '(' expr ')'             # Parens
     ;

INT : [0-9]+ ;
WS  : [ \t\r\n]+ -> skip ;

This defines a simple arithmetic grammar with precedence (multiplication/division higher than addition/subtraction).


pom.xml

Here’s a minimal pom.xml configured to run ANTLR code generation and build a Quarkus app:

<project xmlns="http://maven.apache.org/POM/4.0.0" ...>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>quarkus-antlr-example</artifactId>
  <version>1.0.0-SNAPSHOT</version>

  <properties>
    <quarkus.platform.group-id>io.quarkus</quarkus.platform.group-id>
    <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id>
    <quarkus.platform.version>3.0.0.Final</quarkus.platform.version>
    <maven.compiler.source>17</maven.compiler.source>
    <maven.compiler.target>17</maven.compiler.target>
    <antlr4.version>4.10.1</antlr4.version>
  </properties>

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>${quarkus.platform.group-id}</groupId>
        <artifactId>${quarkus.platform.artifact-id}</artifactId>
        <version>${quarkus.platform.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <dependencies>
    <!-- Quarkus dependencies -->
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-resteasy-reactive</artifactId>
    </dependency>
    <!-- ANTLR runtime -->
    <dependency>
      <groupId>org.antlr</groupId>
      <artifactId>antlr4-runtime</artifactId>
      <version>${antlr4.version}</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <!-- ANTLR plugin to generate parser code -->
      <plugin>
        <groupId>org.antlr</groupId>
        <artifactId>antlr4-maven-plugin</artifactId>
        <version>${antlr4.version}</version>
        <executions>
          <execution>
            <id>generate-antlr</id>
            <phase>generate-sources</phase>
            <goals>
              <goal>antlr4</goal>
            </goals>
            <configuration>
              <sourceDirectory>src/main/antlr4</sourceDirectory>
              <outputDirectory>${project.build.directory}/generated-sources/antlr4</outputDirectory>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <!-- Quarkus Maven plugin -->
      <plugin>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-maven-plugin</artifactId>
        <version>${quarkus.platform.version}</version>
        <executions>
          <execution>
            <goals>
              <goal>build</goal>
              <goal>generate-code</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <!-- Compiler plugin to include generated sources -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.10.1</version>
        <configuration>
          <source>${maven.compiler.source}</source>
          <target>${maven.compiler.target}</target>
          <generatedSourcesDirectory>${project.build.directory}/generated-sources/antlr4</generatedSourcesDirectory>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

Important note: In Quarkus dev mode, sometimes the ANTLR plugin doesn't run automatically unless you explicitly run mvn generate-sources before quarkus:dev.


ExpressionEvaluator.java

package com.example;

import org.antlr.v4.runtime.*;
import org.antlr.v4.runtime.tree.*;

public class ExpressionEvaluator {

    public int evaluate(String input) {
        CharStream cs = CharStreams.fromString(input);
        ExprLexer lexer = new ExprLexer(cs);
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        ExprParser parser = new ExprParser(tokens);
        ParseTree tree = parser.prog();
        EvalVisitor visitor = new EvalVisitor();
        return visitor.visit(tree);
    }

    private static class EvalVisitor extends ExprBaseVisitor<Integer> {

        @Override
        public Integer visitMulDiv(ExprParser.MulDivContext ctx) {
            int left = visit(ctx.expr(0));
            int right = visit(ctx.expr(1));
            if (ctx.op.getText().equals("*")) {
                return left * right;
            } else {
                return left / right;
            }
        }

        @Override
        public Integer visitAddSub(ExprParser.AddSubContext ctx) {
            int left = visit(ctx.expr(0));
            int right = visit(ctx.expr(1));
            if (ctx.op.getText().equals("+")) {
                return left + right;
            } else {
                return left - right;
            }
        }

        @Override
        public Integer visitInt(ExprParser.IntContext ctx) {
            return Integer.parseInt(ctx.INT().getText());
        }

        @Override
        public Integer visitParens(ExprParser.ParensContext ctx) {
            return visit(ctx.expr());
        }
    }
}

ExpressionResource.java

package com.example;

import jakarta.ws.rs.*;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.core.Response;

@Path("/expr")
public class ExpressionResource {

    ExpressionEvaluator evaluator = new ExpressionEvaluator();

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public Response eval(@QueryParam("input") String input) {
        try {
            int result = evaluator.evaluate(input);
            return Response.ok(String.valueOf(result)).build();
        } catch (Exception e) {
            return Response.status(Response.Status.BAD_REQUEST)
                    .entity("Error: " + e.getMessage())
                    .build();
        }
    }
}

With this endpoint, you can call:

GET /expr?input=2*(3+4)

and get back 14.


Testing & Running

  • Run in dev mode:

    mvn clean generate-sources quarkus:dev
  • In production:

    mvn package
    java -jar target/quarkus-antlr-example-1.0.0-SNAPSHOT-runner.jar
  • Test via HTTP call, e.g.:

    curl "http://localhost:8080/expr?input=5+6*2"

Additional Tips & Caveats

  • ANTLR version mismatches can lead to runtime errors (ATN deserialization issues) when mixing versions. Make sure the version used to generate the parser matches the runtime version.

  • For native image builds, dynamic reflection used by ANTLR may need explicit registration or substitutions. Quarkus extensions sometimes provide built-in support.

  • If your grammar grows complex, consider organizing it modularly (lexer grammar, parser grammar, etc.).

  • Use custom error listeners to provide better syntax error messages.

  • For large grammars, incremental parsing or partial parsing might be needed, depending on performance.

ANTLR or others?

 

What is ANTLR?

ANTLR (ANother Tool for Language Recognition) is a powerful parser generator used to read, process, execute, or translate structured text or binary input. It’s widely used for building compilers, interpreters, DSLs, and expression parsers.


✅ Best Alternatives to ANTLR (Depending on Use Case)

Tool | Best For | Language Support | Notes
ANTLR | General-purpose grammar parsing | Java, C#, Python, JavaScript, Go, Swift, C++ | Powerful, with great grammar language and tooling
PEG.js / peggy | Lightweight JavaScript DSL parsers | JavaScript, TypeScript | Simple PEG-based parsers for browsers and Node.js
JavaCC | Java-based compiler construction | Java | Mature but lower-level than ANTLR
Bison / Yacc | Traditional compiler construction | C / C++ | Used in system-level parsers, old but robust
ANTLR4 + Quarkus | Reactive microservices with expression/DSL parsing | Java (Quarkus) | Combine ANTLR and Quarkus for expression parsing in APIs
Parboiled | Parsing in Java/Kotlin/Scala without grammar files | Java, Kotlin, Scala | Code-based, elegant, but less expressive than ANTLR
Ohm | Education & DSL design | JavaScript | Visual + grammar editing; great for prototyping
Chevrotain | Performance-critical parsing in JS | JavaScript, TypeScript | Fastest JavaScript parser combinator toolkit
Ragel | Finite state machines + protocols | C, C++, Java | Great for network protocols and binary formats
Tree-sitter | Incremental parsing for editors | Rust / C | Used in editors (VSCode, Neovim, Atom) for syntax parsing

🥇 Best Overall: ANTLR (for Most Use Cases)

  • 🛠️ Ideal for DSLs, expressions, configuration files, etc.

  • 💬 Excellent grammar language (EBNF-style)

  • 🔄 Generates parse trees and ASTs automatically

  • 🔌 Works across many languages (Java, Python, JS, etc.)

  • 👨‍🏫 Huge community, learning resources, and examples

  • 🔧 Great tooling: plugin support for IntelliJ, VSCode, Maven, Gradle

Use ANTLR if:

  • You want full control over grammar and syntax

  • You’re building interpreters or compilers

  • You want a battle-tested parser generator


🧠 If You Want a Code-Based Approach (Not Grammar Files)

Use Parboiled (Java/Kotlin) or Chevrotain (JS)

  • These let you build parsers in code, without writing grammar files

  • Easier to debug and integrate in modern applications

  • Great for embedding parsers in microservices (e.g., Quarkus/Vert.x)


⚡ If You Need Editor-Friendly or Live Parsing

Use Tree-sitter or Ohm

  • Useful for syntax highlighting, IDE features, or interactive language tools

  • Tree-sitter supports incremental parsing for live code editing

  • Ohm offers visual tools for learning and debugging grammars


🧪 Bonus: Combine ANTLR with Quarkus

If you’re building a Quarkus app and want to parse:

  • Math expressions

  • Query DSLs

  • Config languages

  • Custom APIs

👉 Use ANTLR to define your language and integrate it into a Quarkus REST or Kafka microservice for parsing or validating inputs.


🔚 Conclusion: Which is Best?

Use Case | Best Option
General DSLs / Expression Parsing | ANTLR
Java-only compiler tools | JavaCC
Interactive grammars / prototyping | Ohm
Performance in JS | Chevrotain
Editor tooling (VSCode/Neovim) | Tree-sitter
Parsing in Java without grammar files | Parboiled
Protocols / FSMs | Ragel

Using Schema Registry in Quarkus app

Using Schema Registry in Quarkus Application Development

Schema Registry is a critical component when working with Apache Kafka and Avro in Quarkus applications. It ensures that the data structure (schema) is consistent and compatible across producers and consumers. Here's a concise guide to integrating Schema Registry in a Quarkus application using a Student object.


1. Add Dependencies

Include the necessary dependencies in your pom.xml for Kafka, Avro, and Schema Registry support:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-reactive-messaging-kafka</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-apicurio-registry-avro</artifactId>
</dependency>

2. Configure Application Properties

Set up the connection to your Schema Registry in the application.properties file:

# Kafka broker configuration
kafka.bootstrap.servers=localhost:9092

# Schema Registry configuration
mp.messaging.connector.smallrye-kafka.schema.registry.url=http://localhost:8081

# Avro serialization
mp.messaging.outgoing.my-topic.value.serializer=io.apicurio.registry.utils.serde.AvroKafkaSerializer
mp.messaging.incoming.my-topic.value.deserializer=io.apicurio.registry.utils.serde.AvroKafkaDeserializer

🔁 Replace http://localhost:8081 with the URL of your Schema Registry (e.g., Confluent or Apicurio).


3. Define Avro Schema

Create Avro schema files (e.g., student.avsc) and generate Java classes using the Avro Maven plugin:

Example: student.avsc

{
  "namespace": "com.example.avro",
  "type": "record",
  "name": "Student",
  "fields": [
    { "name": "name", "type": "string" },
    { "name": "email", "type": "string" },
    { "name": "grade", "type": "int" }
  ]
}

Add the Avro Maven Plugin to pom.xml:

<plugin>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-maven-plugin</artifactId>
  <version>1.11.1</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>schema</goal>
      </goals>
      <configuration>
        <sourceDirectory>${project.basedir}/src/main/avro</sourceDirectory>
        <outputDirectory>${project.build.directory}/generated-sources/avro</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>

Place your student.avsc file in src/main/avro.


4. Implement Kafka Producers and Consumers

Use Quarkus' reactive messaging to produce and consume messages with the Student Avro object.

✅ Producer Example:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;
import com.example.avro.Student;

@ApplicationScoped
public class KafkaStudentProducer {

    @Inject
    @Channel("my-topic")
    Emitter<Student> emitter;

    public void send(Student student) {
        emitter.send(student);
    }
}

✅ Consumer Example:

import org.eclipse.microprofile.reactive.messaging.Incoming;
import com.example.avro.Student;

public class KafkaStudentConsumer {

    @Incoming("my-topic")
    public void consume(Student student) {
        System.out.println("Received student: " + student);
    }
}

Here, Student is the Java class generated from your Avro schema (student.avsc).


5. Test Schema Compatibility

Ensure your producer and consumer schemas are compatible. The Schema Registry (e.g., Apicurio or Confluent) will validate compatibility automatically at runtime.


✅ Summary

By following these steps, you can seamlessly integrate Schema Registry into your Quarkus application, enabling robust, versioned, and schema-compliant Kafka messaging with Avro.

Using the Student object example, you’ve seen:

  • How to configure Schema Registry

  • Define and generate Avro classes

  • Produce and consume Kafka messages with reactive messaging

  • Automatically validate schema compatibility at runtime


💡 Tip: If you're using Apicurio Registry, you can also explore features like artifact versioning, API-based registration, and schema evolution rules (BACKWARD, FORWARD, FULL compatibility).
