Hermes Lekkas


Kalynt: An Open-Core AI IDE with Offline LLMs, P2P Collaboration and much more...

I'm Hermes, 18 years old from Greece. For the last month, I've been
building Kalynt – a privacy-first AI IDE that runs entirely offline with real-time P2P
collaboration. It's now in v1.0-beta, and I want to share what I learned.

The Problem I Wanted to Solve

I love VS Code and Cursor. They're powerful. But they both assume the same model:
send your code to the cloud for AI analysis.

As someone who cares about privacy, that felt wrong on multiple levels:

  • Cloud dependency: Your LLM calls are logged, potentially trained on, always traceable.
  • Single-user design: Neither is built for teams from the ground up.
  • Server reliance: "Live Share" and collaboration features rely on relay servers.

I wanted something different. So I built it.

What is Kalynt?

Kalynt is an IDE where:

  • AI runs locally – via node-llama-cpp. No internet required.
  • Collaboration is P2P – CRDTs + WebRTC for real-time sync without servers.
  • It's transparent – all safety-critical code is open-source (AGPL-3.0).
  • It works on weak hardware – built and tested on an 8GB Lenovo laptop.

The Technical Deep Dive

Local AI with AIME

Most developers want to run LLMs locally but think "that requires a beefy GPU or cloud subscription."

AIME (Artificial Intelligence Memory Engine) is my answer. It's a context management layer
that lets agents run efficiently even on limited hardware by:

  • Smart context windowing
  • Efficient token caching
  • Local model inference via node-llama-cpp

Result: You can run Mistral or Llama on a potato and get real work done.
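
To make that concrete, here is a minimal sketch of what local inference through node-llama-cpp looks like (using its current chat-session API; the model file, path, and context size are placeholders, and AIME's windowing and caching layers sit on top of this rather than being shown here):

typescript
import path from "path";
import { getLlama, LlamaChatSession } from "node-llama-cpp";

// Load a quantized GGUF model from disk – no network access involved.
const llama = await getLlama();
const model = await llama.loadModel({
  modelPath: path.join(process.cwd(), "models", "mistral-7b-instruct.Q4_K_M.gguf"),
});

// A small context window keeps memory usage sane on an 8GB machine.
const context = await model.createContext({ contextSize: 4096 });
const session = new LlamaChatSession({ contextSequence: context.getSequence() });

// Everything below runs fully offline.
const answer = await session.prompt("Summarize what this function does: ...");
console.log(answer);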

P2P Sync with CRDTs

Collaboration without servers is hard. Most tools gave up and built it around a central
relay (Figma, Notion, VS Code Live Share).

I chose CRDTs (Conflict-free Replicated Data Types) via yjs:

  • Every change carries a logical timestamp, so merges are order-independent
  • Peers sync directly via WebRTC
  • No central authority = no server required
  • Optional end-to-end encryption

The architecture:
@kalynt/crdt → conflict-free state
@kalynt/networking → WebRTC signaling + peer management
@kalynt/shared → common types
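
Here is roughly what that looks like in plain yjs + y-webrtc terms (a simplified sketch: the room name, password, and text key are invented for illustration, and Kalynt wraps this behind the @kalynt/* packages above, so the real API differs):

typescript
import * as Y from "yjs";
import { WebrtcProvider } from "y-webrtc";

// One shared document per workspace. Peers that join the same room
// discover each other via signaling, then sync directly over WebRTC.
const doc = new Y.Doc();
const provider = new WebrtcProvider("kalynt-demo-room", doc, {
  password: "optional-shared-secret", // encrypts the exchanged updates
});

// A shared text type backing an editor buffer.
const buffer = doc.getText("main.ts");

// Local edits mutate the CRDT...
buffer.insert(0, "console.log('hello from peer A');\n");

// ...and remote edits arrive as observed events, already merged
// deterministically – no server decided the order.
buffer.observe(() => {
  console.log("buffer is now:\n" + buffer.toString());
});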

Open-Core for Transparency

The core (editor, sync, code execution, filesystem isolation) is 100% AGPL-3.0.
You can audit every security boundary.

Proprietary modules (advanced agents, hardware optimization) are closed-source, but they:

  • Run entirely locally
  • Are heavily obfuscated in the shipped binaries
  • Are not required for the core IDE

How I Built It

Timeline: 1 month
Hardware: 8GB Lenovo laptop (no upgrades)
Code: ~44k lines of TypeScript
Stack: Electron + React + Turbo monorepo + yjs + node-llama-cpp

Process:

  1. I designed the architecture (security model, P2P wiring, agent capabilities)
  2. I used AI models (Claude, Gemini, GPT) to help with implementation
  3. I reviewed, tested, and integrated everything
  4. Security scanning via SonarQube + Snyk

This is how modern solo development should work: humans do architecture and judgment,
AI handles implementation grunt work.

What I Learned

1. Shipping beats perfect

I could have spent another month polishing. Instead, I shipped v1.0-beta and got real
feedback. That's worth more than perceived perfection.

2. Open-core requires transparency

If you're going to close-source parts, be extremely clear about what and why.
I documented SECURITY.md, OBFUSCATION.md, and
CONTRIBUTING.md to show I'm not hiding anything
nefarious.

3. WebRTC is powerful but gnarly

P2P sync is genuinely hard. CRDTs solve the algorithmic problem, but signaling,
NAT traversal, and peer discovery are where you lose hours.
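
If you head down the same path, most of that lost time shows up in wiring like this (a sketch using y-webrtc; the signaling URL and TURN entry are placeholders, not Kalynt's actual endpoints):

typescript
import * as Y from "yjs";
import { WebrtcProvider } from "y-webrtc";

const doc = new Y.Doc();

// The CRDT itself needs no server, but peers still need (1) a signaling
// channel to find each other and (2) STUN/TURN to get through NATs.
const provider = new WebrtcProvider("kalynt-demo-room", doc, {
  // Self-hosted or public signaling endpoints (placeholder URL).
  signaling: ["wss://signaling.example.com"],
  // Options passed through to simple-peer, which y-webrtc uses internally.
  peerOpts: {
    config: {
      iceServers: [
        { urls: "stun:stun.l.google.com:19302" },
        // A TURN relay is the usual fallback when direct traversal fails:
        // { urls: "turn:turn.example.com:3478", username: "user", credential: "pass" },
      ],
    },
  },
});

// y-webrtc reports peer churn, which is where most of the debugging happens.
provider.on("peers", (event: any) => {
  console.log("webrtc peers:", event.webrtcPeers);
});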

4. Privacy-first is a feature, not a checkbox

It's not "encryption support added." It's "the system is designed so that
centralized storage is optional, not default."

Try It

GitHub: https://github.com/Hermes-Lekkas/Kalynt

Download installers: https://github.com/Hermes-Lekkas/Kalynt/releases

Or build from source:


bash
git clone https://github.com/Hermes-Lekkas/Kalynt.git
cd Kalynt
npm install
npm run dev

Top comments (3)

ANIRUDDHA ADAK

Impressive!

PEACEBINFLOW

This is genuinely impressive — especially at 18.

What stands out to me isn’t “offline LLMs” or “P2P collab” in isolation, it’s that you designed the system around privacy first, instead of bolting it on later. Most tools say “we care about privacy” and then quietly route everything through a server anyway.

The CRDT + WebRTC choice is the right kind of pain to take on. Hard, unglamorous, but foundational. Same with running models locally on weak hardware — that constraint forces real engineering instead of hand-waving.

Also appreciate the honesty around open-core. Being explicit about what’s closed, why it’s closed, and how it behaves locally is way better than pretending everything is magically open.

Big respect for shipping in a month and putting it in people’s hands. That feedback loop is where the real learning happens.

Curious where you want to take this next:

deeper agent workflows?

multi-repo / mono-repo awareness?

or tighter memory + context control for local models?

Excited to see where Kalynt goes 👀

Hermes Lekkas

Thank you for your response! The next stage is to make small LLMs work better as agents: because of their small context windows and low parameter counts, it's hard to get them to behave reliably as agents, for example reading, writing, and running a file. Kalynt has a long way to go before it's stable, which is why I released it as a v1 beta. I'm also looking for contributors to help me maintain and develop its ~44,000-line codebase.