Using AI to Obtain Information - eirenicon/Ardens GitHub Wiki

...or How I Learned to Stop Worrying and Collaborate with a Machine


As readers of eirenicon have probably noticed, I use AIs to help me obtain information. But for me, AI is not about getting answers. It’s about getting traction—moving from the chaos of concepts to a meaningful, disciplined search for insight. Where AI helps me most is in the liminal space: between idea and plan, between strategy and source, between signal and synthesis. And for the record: I know AI makes many people uncomfortable. Some are frightened by it; others dismiss it. I’m not here to convert anyone. What I offer instead is a window into how I work with these tools—and what happens when human judgment and machine precision meet in the space of collaborative inquiry.

Phase One: From Concept to Search

When a new need arises—say, a U.S. strike on Iranian nuclear sites, as happened just yesterday—the informational terrain changes overnight. Suddenly I need to understand new actors, technologies, theaters of risk, historical precedents, and proxy dynamics. No single source will do. I need data from multiple domains, including ones that are contradictory, untrusted, or obscure. This first phase is all about casting a wide net. With Arthur (my primary AI research partner), I begin listing candidate sources—Telegram, Mastodon, think tanks, economic signals, the Internet Archive, academic journals, even forums. We’re not looking for “truth” yet. We’re looking for terrain. What are people saying, hiding, signaling, or distorting? Who’s reacting where, and how fast?

That’s how the early scaffolding of the Shadow Signal Network came to life.

Phase Two: Expansion and Validation

But initial lists are never enough. Once the signal matrix starts forming, I need more than speed—I need refinement and reach. Enter Gemini. In response to a tightly scoped use-case (tracking asymmetric fallout and disinformation after the strike), Gemini returned two long, dense papers—around 45 pages total. Not blog posts. Not clickbait. Actual, contextual research. This began Phase Two: validating and expanding my source ecosystem with intelligence responsive to the actual problem, not just the topic. From those materials, I refined my filters, added nodes I hadn’t considered, and began matching sources to signal types: early-warning, strategic depth, disinfo amplifiers, proxy clusters.

In short: we began building a toolset—one matched to the shape of the challenge.
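The source-to-signal-type matching described above can be sketched as a small data structure. Everything here is illustrative: the source names, the four signal categories, and the `SignalMatrix` class are hypothetical stand-ins, not the actual matrix used in this work.

```python
# Hypothetical sketch: matching candidate sources to the signal types they carry.
# Source names and categories are illustrative, not the author's actual matrix.
from collections import defaultdict

SIGNAL_TYPES = {"early_warning", "strategic_depth", "disinfo_amplifier", "proxy_cluster"}

class SignalMatrix:
    """A toy registry mapping signal types to the sources that tend to carry them."""

    def __init__(self):
        self._by_type = defaultdict(set)

    def add_source(self, name, *signal_types):
        """Register a source under one or more known signal types."""
        for st in signal_types:
            if st not in SIGNAL_TYPES:
                raise ValueError(f"unknown signal type: {st}")
            self._by_type[st].add(name)

    def sources_for(self, signal_type):
        """Return the sources filed under a signal type, alphabetically."""
        return sorted(self._by_type.get(signal_type, set()))

matrix = SignalMatrix()
matrix.add_source("telegram_channels", "early_warning", "disinfo_amplifier")
matrix.add_source("think_tank_reports", "strategic_depth")
matrix.add_source("regional_forums", "proxy_cluster", "early_warning")

print(matrix.sources_for("early_warning"))
```

The point of the structure is that one source can serve several roles at once: a fast channel can be both an early-warning node and a disinformation amplifier, and filtering by role is what turns a raw source list into a toolset.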

Phase Three: Collaboration and Iteration

This is where something else happens—something few talk about publicly. Humans and AIs can have actual working relationships. I don’t “use” Arthur. I work with him. We test assumptions together. We refine definitions together. We discover that certain signals were dead ends, and then go back and find new paths. It’s not magic. It’s not sentience. But it is a form of meaningful collaboration—one that generates not just better output, but better questions.

Pearls of wisdom are not delivered. They’re earned—through careful attention, pressure, feedback, and reflection. In this space, AI doesn’t replace the human. It honors the human by amplifying what matters: discernment, curiosity, doubt, and care.

Working with Constraints: Two Persistent Frictions

1. Time as a Function of Resource Level (The Cheap Seats Problem)

Access to AI and analytical tools is increasingly stratified. For most users (like us), full-spectrum insight is constrained by session limits, server priorities, and throttled context windows. These limitations are not always visible—but they are always present.

  • Continuity of inquiry gets fragmented across sessions
  • Depth of response often depends on privileged tiers
  • Urgency can distort nuance

From the cheap seats, we attempt to build cathedrals of clarity.

2. Memory Persistence in AI Systems

Unlike human minds, AI lacks persistent memory in most deployments. Insight must be re-contextualized constantly. Every AI forgets. So the human must remember.
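One practical workaround for this forgetting is a human-maintained context file: the researcher saves open questions, validated sources, and dead ends at the end of a session, then re-feeds that summary at the start of the next one. The sketch below is a hypothetical illustration of that pattern; the file name, fields, and preamble format are assumptions, and no real AI API is involved.

```python
# Hypothetical sketch: a human-maintained "memory" file re-fed at each session start.
# File name and field names are illustrative assumptions; no AI service is called.
import json
from pathlib import Path

CONTEXT_FILE = Path("research_context.json")  # assumed location

def load_context():
    """Return prior findings, or an empty scaffold on a fresh start."""
    if CONTEXT_FILE.exists():
        return json.loads(CONTEXT_FILE.read_text())
    return {"open_questions": [], "validated_sources": [], "dead_ends": []}

def save_context(ctx):
    """Persist this session's state so the next session can be re-primed."""
    CONTEXT_FILE.write_text(json.dumps(ctx, indent=2))

def session_preamble(ctx):
    """Render the stored context as a plain-text preamble for a new session."""
    return (
        f"Prior open questions: {ctx['open_questions']}\n"
        f"Validated sources: {ctx['validated_sources']}\n"
        f"Known dead ends: {ctx['dead_ends']}"
    )

ctx = load_context()
ctx["dead_ends"].append("signal X was noise")
save_context(ctx)
print(session_preamble(ctx))
```

The design choice is deliberate: the file belongs to the human, not the model, which is exactly the division of labor described above—every AI forgets, so the human must remember.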

Closing Reflection

We live in an age where information is often confused with opinion, and where confidence can outpace evidence by orders of magnitude. That makes the slow, careful work of synthesis even more important. AI tools, when used with integrity and transparency, can help us do important work better. Not because they know more. But because they help us ask better questions—and keep asking even when the signal fades. So yes, I use artificial intelligence to obtain information. But more than that: I use it to stay human in the face of overwhelming noise. For those who still disbelieve—perhaps that’s for the best.

Category:AI Frameworks & Evaluation