redm 8 hours ago [-]
This is pretty cool!
What's holding me back from AI repos and agents isn't running them locally, though. It's the lack of granular control. I'm not even sure what I want. I certainly don't want to approve every request, but the idea of large amounts of personal data being accessible, unchecked, to an AI is concerning.
I think perhaps an agent that focuses just on security, that learns about your personal preferences, is what might be needed.
kenforthewin 7 hours ago [-]
Thanks for taking a look!
Agreed regarding the privacy/security hesitations. Running the models locally with Ollama is an option, but of course there are the hardware requirements and the limitations of open-source models to contend with. Ultimately it's a balance between privacy and ease of use, and I'm not sure there's a good one-size-fits-all for that balance.
tedmiston 2 hours ago [-]
Is your idea of granular control (roughly) a group of agents in separate containers, each writing back to its own designated store? Would that be sufficient, or do you want more control than that?
visarga 40 minutes ago [-]
I did something similar: markdown and code agents for memory, multiple feeds for intake, and my own browsing and Claude CLI messages get indexed too.
aavci 3 hours ago [-]
Does anybody mind explaining how the web of articles in the first image helps the writer?
kenforthewin 2 hours ago [-]
The honest answer is it doesn't help a ton, at least not in its current form. It's fun to look at, and occasionally I'll see some interesting semantic connections between articles - but by far the more useful tools here are the wiki generation, auto-tagging, and chat/MCP features. The graph view definitely needs more love - if anyone has thoughts on how to make it more useful, I'd love to hear them.
eucyclos 58 minutes ago [-]
My first thought is that this would be a great place to find new topics of interest at the "overlap" regions.
andreygrehov 8 hours ago [-]
Great work, but your macOS build cannot be opened. You need to sign the app through the Apple Developer Program.
kenforthewin 8 hours ago [-]
Thanks! The project is still in its early stages; I haven't had a chance to set up app signing yet. Right now the easiest way to get started is using the web interface via docker compose.
actionfromafar 8 hours ago [-]
System Settings > Privacy & Security, scroll down to the "Security" section, and click Open Anyway
pwchiefy 4 hours ago [-]
I think tools like this will get really popular once more non-technical users get comfortable with CLI-based agentic tools. What's your go-to agent harness when using this? Will check it out!
ukuina 8 hours ago [-]
Seems like a LogSeq/Roam/Obsidian alternative?
kenforthewin 8 hours ago [-]
For sure. The idea here (or at least how I've been using it) is to use Atomic as a catch-all place to put personal notes, interesting articles, research ideas... pretty much anything, and Atomic will handle the categorization and knowledge synthesis. For example, I have a knowledge base that uses RSS to sync top Hacker News articles, and I'll occasionally generate new wiki-style articles which summarize and synthesize the articles based on top-level categories (AI, hardware, philosophy, you name it).
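The intake-and-categorize flow described above could be sketched roughly like this (a toy illustration only, using stdlib RSS parsing and a hard-coded keyword map in place of the LLM-based auto-tagging; the function names and sample feed are made up, not Atomic's actual code):

```python
# Hypothetical sketch: pull item titles from an RSS feed and group them by
# top-level category, ready for wiki-style synthesis. Real auto-tagging
# would use a model, not keywords.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<rss><channel>
  <item><title>New GPU beats benchmarks</title></item>
  <item><title>Stoicism for programmers</title></item>
</channel></rss>"""

# Toy keyword map standing in for LLM-based categorization.
CATEGORIES = {
    "hardware": ["gpu", "chip", "cpu"],
    "philosophy": ["stoicism", "ethics"],
}

def categorize(title: str) -> str:
    lowered = title.lower()
    for category, keywords in CATEGORIES.items():
        if any(word in lowered for word in keywords):
            return category
    return "uncategorized"

def ingest(feed_xml: str) -> dict:
    """Group feed item titles under their inferred category."""
    grouped = {}
    for item in ET.fromstring(feed_xml).iter("item"):
        title = item.findtext("title") or ""
        grouped.setdefault(categorize(title), []).append(title)
    return grouped
```

Each category bucket would then be handed to the summarization step to produce one wiki article per topic.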
maurice-nomad 5 hours ago [-]
looks fine
bamwor 1 hour ago [-]
Clean approach to connecting knowledge semantically. The self-hosted angle is smart — data ownership matters especially for personal knowledge. How are you handling the semantic matching under the hood?
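For what it's worth, the usual shape of this kind of semantic matching (a guess at the approach, not confirmed by the author) is to embed each note and link pairs whose vectors are close by cosine similarity:

```python
# Sketch of embedding-based semantic linking: the vectors here are toy
# stand-ins; a real system would get them from an embedding model.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def related_pairs(embeddings, threshold=0.8):
    """Return pairs of note ids whose cosine similarity meets the threshold."""
    ids = sorted(embeddings)
    pairs = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if cosine_similarity(embeddings[a], embeddings[b]) >= threshold:
                pairs.append((a, b))
    return pairs
```

Those above-threshold pairs are exactly the edges a graph view like the one in the first image would draw.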