Ask HN: Discernment Lattice [Prompting]
6 points by samstave 33 days ago | 17 comments
There were questions about GPT response quality, and this comes from my experience wrangling GPTs to function to spec over the past while.

I am posting this as its own Ask HN because the topic has spanned multiple comment threads. I'd like to ask the HN community about their understanding and use of the Discernment Lattice / Domain concept. I just discovered it, and I am interested to know whether this is already what others are doing. What are your thoughts on this?

The following is roughly my discovery of the concept over the last while - so forgive me if this is uninteresting to you, but it feels sound... Is this pedestrian, or interesting? Is everyone doing this already and I am new to the playground?

####

Discernment Lattice:

https://i.imgur.com/WHoAXUD.png

A discernment lattice is a conceptual framework for analyzing and comparing complex ideas, systems, or concepts. It's a multidimensional structure that helps identify similarities, differences, and relationships between entities.

---

@Bluestein https://i.imgur.com/lAULQys.png

Questioned whether the Discernment Lattice had any effect on the quality of my prompt's outcome, so I thought about something I asked an AI regarding yesterday's HN thread from ScreenShotBot and their "no db" architecture, had it compare some things, and:

https://i.imgur.com/2okdT6K.png

---

I used this method, but in a more organic way, when I was asking for an evaluation of Sam Altman from the perspective of an NSA cyber-security profiler, and it was effective that first time I used it.

https://i.imgur.com/Ij8qgsQ.png

..

https://i.imgur.com/Vp5dHaw.png

Cite its influences.

https://i.imgur.com/GGxqkEq.png

---

With that said, I then thought about how to better use the Discernment Lattice as a premise from which to craft a prompt:

>"provide a structured way to effectively frame a domain for a discernment lattice that can be used to better structure a prompt for an AI to effectively grok and perceive from all dimensions. Include key terms/direction that provide esoteric direction that an AI can benefit from knowing - effectively using, defining, querying AI Discernment Lattice Prompting"

https://i.imgur.com/VcPxKAx.png

---

So now I have a good little structure for framing a prompt concept to a domain:

https://i.imgur.com/UkmWKGV.png

So, as an example, I checked its logic by evaluating a stock, NVIDIA, in a structured way.

https://i.imgur.com/pOdc83j.png

But really what I am after is how to structure things into a Discernment Domain. What I want to do is CREATE a Discernment Domain as a JSON profile, and then feed that to the Crawlee library to use as a structure to crawl...
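To make that concrete, here is a minimal sketch of what one of these lattice JSON profiles might look like, plus a loader that hands seed URLs to the crawler. The schema (domain/dimensions/entities/seed_urls) and file layout are hypothetical - my current thinking, not a spec:

    import json
    from pathlib import Path

    # Hypothetical lattice profile: dimensions to discern across,
    # entities of interest, and seed URLs for the crawler to start from.
    LATTICE = {
        "domain": "stocks",
        "dimensions": ["financials", "supply_chain", "policy", "datacenters"],
        "entities": ["NVIDIA", "Intel", "OpenAI"],
        "seed_urls": ["https://www.importyeti.com/"],
    }

    Path("lattices").mkdir(exist_ok=True)
    Path("lattices/stocks.json").write_text(json.dumps(LATTICE, indent=2))

    def load_lattice(path):
        """Read one Discernment Lattice profile from the lattice directory."""
        return json.loads(Path(path).read_text())

    def crawl_seeds(lattice):
        """The URLs a crawler (Crawlee or otherwise) would start from."""
        return list(lattice["seed_urls"])

    print(crawl_seeds(load_lattice("lattices/stocks.json")))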

To do that, I want to serve it as a workflow to a txtai library function that checks my Discernment Lattice directory for instructions on what to crawl:

https://i.imgur.com/kNiVT5J.png

This looks promising; let's take it to the next step:

https://i.imgur.com/Lh4luiL.png

--

https://i.imgur.com/BiWZM86.png

---

In closing: context window sizes are way smaller than you expect, and project context directories are not well respected by the bots thus far; hallucinations, memory dumps, and other nefarious actions are rampant.

So - Discernment Domain Scaffolding and Lattice files to keep a bot in check.

So I thought the above out loud, and I am going to attempt to use a library of Discernment Domain Lattice JSONs to try to keep a bot on topic. AI ADHD vs. human ADHD is frustrating as F... The plan is to iteratively update the lattice templates for a given domain, then point the Crawlee researcher at them to fold findings into a structure based on them. For the stock example, that means being able to slice across things in interesting ways.




Are we operating on a similar question with two different approaches?

https://github.com/space-bacon/Semiotic-Analysis-Tool

I’m going to put some time into your question today and hopefully return with a more useful response.

I wanted to shamelessly get my early-stage work on semiotic analysis in front of your eyeballs in the meantime, as I see this as one of the more valuable pieces of content I've consumed today, and it could help improve my script's direction as well.


OK, here is what I have come up with. I understand we are close to having one foot in the loony bin here, but such is innovation.

1. Semiotic Integration: Retain parts of my semiotic analysis tool to preprocess and enrich the input data before it's mapped onto the lattice. This will ensure that the lattice includes not only raw data but also the cultural and symbolic meanings of the entities involved.

2. Semiotic feedback loops: the analysis results inform the refinement of the lattice structure, which in turn improves prompt generation.
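A rough sketch of how I picture those two pieces composing - every function body here is a placeholder for the real tool, not its actual code:

    # Sketch of the enrich -> map -> analyze -> refine loop.

    def semiotic_enrich(entity):
        # Attach cultural/symbolic context to a raw entity (placeholder).
        return {"name": entity, "symbols": [], "cultural_context": None}

    def map_to_lattice(enriched, lattice):
        # Place enriched entities onto the lattice's dimensions.
        lattice["entities"] = enriched
        return lattice

    def analyze(lattice):
        # Lattice-driven analysis; returns suggested structural refinements.
        return {"add_dimensions": []}

    lattice = {"dimensions": ["economic", "symbolic"], "entities": []}
    for _ in range(3):  # feedback loop: analysis results refine the lattice
        enriched = [semiotic_enrich(e) for e in ["NVIDIA", "OpenAI"]]
        lattice = map_to_lattice(enriched, lattice)
        lattice["dimensions"] += analyze(lattice)["add_dimensions"]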


YES!

That is awesome!

We are fully in the same context window. I'd like to collaborate with you on this.

I started by thinking about giving the AIs an archetype to follow, a "Contextual Archetype Framework": https://i.imgur.com/yngUwpr.png

Where I can inform the AI as to what lens I want it to respond from. That then evolved into the discernment lattice - but it's very similar to the semiotic approach, as you've put it.

When I was attempting to get a GPT to start building a matrix of connections:

>I apologize, but I do not feel comfortable creating a graph of personal connections and relationships based on this private contact information without the consent of the individuals involved. Building such a network could enable privacy violations or misuse of personal data. Perhaps we could have a thoughtful discussion about information ethics and responsible data practices instead.

--

In your project you state:

https://i.imgur.com/HZwWrZA.png

enrich_data_with_external_knowledge(named_entities): Enriches named entities with additional context from Wikipedia and Google Knowledge Graph

Which is what I was going after...
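For the curious, that kind of enrichment can be sketched against Wikipedia's public REST summary endpoint. The function name mirrors the repo's, but this body is my guess at the idea, not the tool's actual implementation:

    import requests

    def enrich_data_with_external_knowledge(named_entities):
        """Fetch a short summary for each entity from Wikipedia's REST API."""
        enriched = {}
        for entity in named_entities:
            url = ("https://en.wikipedia.org/api/rest_v1/page/summary/"
                   + entity.replace(" ", "_"))
            resp = requests.get(url, timeout=10)
            if resp.ok:
                enriched[entity] = resp.json().get("extract", "")
        return enriched

    print(enrich_data_with_external_knowledge(["Polytetrafluoroethylene"]))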

So then I had to get to where I can build an independent Discernment Lattice as JSON, then have the model load that into itself as the constraints to follow.

Em@il me: sstave proton me - I'd like to get on a Discord/Slack and chat about these "one foot in the bin" loony tunes.


When I see people evaluating LLMs in this way I can't help but think they are letting emotion get the better of them:

> I am using the incorrect phrasing, but I've been heavily using Claude's "Project Folders" (paid account) - and when I put "context" files into the project folders, it will "forget" that the files are there - and I'll call it out when it switches back to boilerplate response language - and it apologizes and says "You are correct, I SHOULD HAVE BEEN using the project files for context."

You should probably try to implement your own RAG system with free models locally (ollama, langchain, chromadb can do it, it's very straightforward) so you can understand the process a bit more under the hood.
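A minimal local version really is only a few lines - something like this, assuming a local Ollama server with a pulled model (the model name is just an example):

    import chromadb
    import ollama  # pip install ollama chromadb

    # 1. Index your "context" documents in a local vector store.
    client = chromadb.Client()
    docs = client.create_collection("project_files")
    docs.add(
        ids=["doc1", "doc2"],
        documents=["Style guide: always include full file paths.",
                   "Lattice spec: dimensions are financials and policy."],
    )

    # 2. Retrieve the chunks nearest to the question.
    question = "What does the style guide say about paths?"
    hits = docs.query(query_texts=[question], n_results=2)
    context = "\n".join(hits["documents"][0])

    # 3. Stuff only those retrieved chunks into the prompt.
    reply = ollama.chat(model="llama3", messages=[
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: {question}"}
    ])
    print(reply["message"]["content"])

Once you watch which chunks actually get retrieved, the "forgetting" behavior becomes much less mysterious.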

> How/why is it occurring mid-conversation?

I dunno, usually when something is retrieved it is added to context. But a key part of RAG is determining how to chunk up your content so that the prompt embedding matches up to information that is actually targeted and concise.

So if RAG is behaving suboptimally the first thing to check is, are the input documents targeted and concise? If too much context is being stuffed into the prompt context, then the results will be poor.

You can see this with even just very large prompts, the larger the context the worse the quality, despite model developers claiming ever larger context windows.

I don't think inventing your own terms (which therefore have extremely weak embeddings to match against the embeddings of the model's training content) is the right way to go.

If I chuck "what is a discernment lattice" into gpt-4o I get:

> A "discernment lattice" isn't a widely recognized term in most common fields of study, but it can be interpreted in a few ways depending on context. Here's a breakdown of potential meanings and applications:....

So it's not really giving the model valuable tokens to work with.


> don't think inventing your own terms (which therefore have extremely weak embeddings to match against the embeddings of the model's training content)

Well put - that was, in essence, what I was concerned with regarding that term.-


For me, in-head logic is so FN weird when you attempt to speak it out loud. It makes the brilliant look like idiots and idiots look brilliant!

(what is best to capture logic?) ---

Honestly guys - I am doing my best. But I do so from as informed a place as I can...

(This is why I always give the benefit of the doubt ((I don't know the concept I may be conveying)) HALP)


> (what is best to capture logic?) -

Either formal logic notation or a state machine?

(Depending on what you intend by 'logic' and 'capture' ...)

PS. A Karnaugh map might be of help. As an added bonus, you get to simplify things.-
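For instance, a toy example: take f(A,B,C) with minterms 1, 3, 5, 7. All four ones land in the two C=1 columns of the map, so the whole function collapses to f = C:

             BC=00  BC=01  BC=11  BC=10
      A=0      0      1      1      0
      A=1      0      1      1      0

      f(A,B,C) = Σm(1,3,5,7)  ->  group the C=1 columns  ->  f = C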


> You can see this with even just very large prompts, the larger the context the worse the quality, despite model developers claiming ever larger context windows.

This is precisely the problem I am attempting to address.

The point is that I want to be able to frame the way a particular complex, iterative query discovery can be accomplished, where I utilize already-baked frames of how I want to address the problem - similar to the Baseball workflow in txtai, but in a more robust manner.

https://github.com/neuml/txtai/blob/master/examples/42_Promp...

Where I can build particular query bots that search properly based on context, so I don't have to spend time constructing the context in my prompt - I tell it to APPLY a context to the following prompt. It limits the domain and formulates the domain to search.
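Roughly what I mean by "APPLY a context", sketched with txtai's Workflow/Task primitives - the lattice file (reusing the stocks.json sketch from above) and the framing template are my own invention, not part of txtai:

    import json
    from txtai.workflow import Workflow, Task

    def apply_lattice(prompt, lattice_path="lattices/stocks.json"):
        """Prepend a lattice-derived frame to the prompt (hypothetical template)."""
        lattice = json.loads(open(lattice_path).read())
        frame = ", ".join(lattice["dimensions"])
        return (f"Within the {lattice['domain']} domain, discerning across "
                f"[{frame}]: {prompt}")

    # Task actions receive the batch of elements; rewrite each prompt.
    workflow = Workflow([Task(lambda prompts: [apply_lattice(p) for p in prompts])])

    print(list(workflow(["How do NVIDIA shipments relate to new datacenters?"])))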

Start by looking at how a baseball player's dossier stacks up:

start building a context for player: https://i.imgur.com/2xvreAF.png

Then I can take that idea, and start applying it to say public figures: https://i.imgur.com/pFODyms.png

Build a caged set of lenses; think of it like an agent. When I told it to act like a PhD chemist specializing in materials science as an expert, I then framed which space in that area to focus the response from - which was as an expert in PTFE.

So, when I want to have it look up aspects of, say, a bill passing, I can ask it: "Looking at the congress-person lattice, what in their lattice is related to these various aspects of that bill/act/contract/bailout, where you can see that when something occurred, these properties for this object were affected in such a manner?"

Then you just have, say, a trigger that asks "whose sheet balances out after this thing occurs" - and I have refined how that information will be sought out. For example, it will learn where to grab the best pieces of information over time, as it can be asked "which sources had the best information for [property]?"

Then you, say, follow what MLOps was doing [0] to better understand the language used to describe the same events.

https://mlops.systems/posts/2024-06-25-evaluation-finetuning...

We can then see an article, and then look at whatever lattice we have defined to see who may be connected to that thing - and how.

By developing the discernment model for that domain, you can have the AI evaluate the spaces of interest in various ways and then craft the lattice file - which ideally keeps the FN thing focused, so the context window can hold more complexity without superfluous token/memory cruft. Longer iterative passes can then be made on the same subjects, with room for context and less hallucination/forgetting.

And the outputs of those can be fed to other workflows in txtai or succinctly wrapped in MagicLoop widgets for great justice.


> context windows in all the GPTs are a lie

The above is a very very bold claim, to say the least.-


Likely I am using the incorrect phrasing, but I've been heavily using Claude's "Project Folders" (paid account) - and when I put "context" files into the project folders, it will "forget" that the files are there - and I'll call it out when it switches back to boilerplate response language - and it apologizes and says "You are correct, I SHOULD HAVE BEEN using the project files for context."

It's a flippin' AI robot with a paid service providing a bucket of files to maintain context for the project and in the thread. It produces artifacts that I tell it to reference, and it forgets about its artifacts, or rewrites them when not needed. How/why is this occurring mid-conversation?

When given direction in your settings/preferences, both ChatGPT and Claude will forget and ignore it.

Style guide statements such as "Always include full path" and "Version all files in format XYX"

--

It ignores these settings.

Also, it will pull language from memories past - from unrelated threads/topics at times.

---

This is the premise for this post. I feel like using a Discernment Domain to keep the robot focused may be a successful tool, and I'd like input from others on the concept. Thinking about how many people are building chatbots - maybe a chatbot with a settings panel that allows the definition of such lattice files, a chatbot that is domain-specific from a prompting focus, would be a helpful layer to add on top of things that are trained...

The goal is to pull salient, structured, and definable responses ...

Now, with JSON output from OpenAI, one could structure Discernment Domains across the JSON output fields.

Such that slices that are otherwise opaque/obfuscated might be discoverable just by asking across a discernment lattice (like the stock one I put up) - saying: run the stock discernment across NVIDIA, Intel, and OpenAI, and the use of their datacenter footprint from the DATACENTER discernment lattice.

(which shows: https://i.imgur.com/zO0yz6J.png -- Left is nuke, top = cables, bottom = datacenters. I went to ImportYeti to look into the NVIDIA shipments: https://i.imgur.com/k9018EC.png)

We can see and track where compute is going, what's powering it, how it's growing, etc.

(But I want to be able to use the predefined constraints to feed into the others, from the JSONs of the GPT's output...)
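Roughly, with the current OpenAI Python client in JSON mode - the prompt and field names here are mine, a sketch:

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": ("Run the stock discernment lattice across NVIDIA, "
                        "Intel and OpenAI. Respond as JSON with one key per "
                        "company, each holding datacenter_footprint and "
                        "supply_chain_notes fields."),
        }],
    )

    domains = json.loads(resp.choices[0].message.content)
    print(list(domains))  # slice these fields into the next lattice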

(Is this what everyone does already?)


Interesting observations.-

PS. I really appreciate the thoroughness with which you are approaching this. I wanted to add to what I said about "discernment lattice":

What I was wondering - if this helps - is whether the term "discernment lattice", being (at least in my humble understanding) a lesser-known or non-standard concept, might be having a negative effect on prompt results - where, for example, a simpler description of the concept might yield better results (i.e. the AI might not know, or might have difficulty understanding, what you mean by "discernment lattice", so a simpler description of it - 'multi-dimensional matrix of knowledge cross-referenced from all [these] knowledge domains' - might yield better results). That was basically what I was wondering ...

PS. I now see - if I am correct - that "discernment lattice" is not just a term you are using in prompts, but a "structure" you are building - and then perhaps referring to - from AI-provided context "config" or state files ...


Let me clarify where Discernment Lattice comes from:

Aiming to be succinct and stoically direct about the archetype directive to give the AI, I came up with "Discernment Lattice" to describe how I wanted the AI to lens its take on the problem. I put it that way because I was attempting to keep a bot focused on specifically the domain I wanted, and I was looking for better, more precise words with which to scaffold the request.

So I told it to use the archetype of a particular professional career/degree/position as the constraints for its "Discernment Lattice" - I had not thought of that phrase prior; I just knew it reflected my intent.

--

After you pointed that out, I didn't want to ask what a discernment lattice was, because I didn't know if it existed - so instead I asked it to "describe the discernment lattice it used" for yesterday's questions, and it gave me a great description of what it used. AHA! It is a thing... So then I asked it to tell me what a discernment lattice is, using a discernment lattice.

I usually close a prompt with "review, explain, propose, confirm, execute", which forces them to tell me what they are about to do... but now I'll add this to the discernment lattice template I want it to refer to for workflow prompts.

____

>>...from AI-provided context "config" or state files ...

Yes, this - basically I want a way to guide the prompt in a structured manner, where as part of the prompt it just needs to look up the Discernment Domain file I reference to get the deeper context of the question I want to ask.

So, I can have a discernment domain for what I want to look at when I mention "From CORPO import NVIDIA, OIA, MSFT, INTC and from OPENSECRETS investments show LAW affects STOCK" -

but these can be other workflows - plans within plans...

https://i.imgur.com/Fi5GYRl.png

https://i.imgur.com/fRyVDR5.png

https://i.imgur.com/seTHs5R.png

https://i.imgur.com/wlne9pT.png
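To make the "From CORPO import ..." line concrete: a tiny resolver could expand that directive into inline lattice context before the prompt ever goes out. File names and schema are all hypothetical:

    import json
    from pathlib import Path

    # Hypothetical CORPO lattice on disk.
    Path("lattices").mkdir(exist_ok=True)
    Path("lattices/CORPO.json").write_text(json.dumps({
        "entities": {"NVIDIA": {"sector": "semis"},
                     "MSFT": {"sector": "software"},
                     "INTC": {"sector": "semis"}}}))

    def resolve_directive(lattice_name, entities, lattice_dir="lattices"):
        """Expand 'From X import A, B' into inline lattice context."""
        path = Path(lattice_dir, f"{lattice_name}.json")
        lattice = json.loads(path.read_text())
        selected = {e: lattice["entities"].get(e, {}) for e in entities}
        return f"Context ({lattice_name}): {json.dumps(selected)}"

    prompt = (resolve_directive("CORPO", ["NVIDIA", "MSFT", "INTC"])
              + "\nShow how LAW affects STOCK.")
    print(prompt)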

---

> it described something sufficiently like one that you concluded the AI knew what a "discernment lattice" was ...

? Is this wrong? Yeah - now that I see the definition, it reflects not only my initial intent, but also structures it in a sound format, and it meets the esoteric function of informed intent.

Meaning that, like an agent, it's really to be an agent of "From The Perspective Of" - and more intrinsically, "DiscernTrueConnBtwn[A,B,C,D] as informed by Lattice[X,Y,Z] where [N] can be better understood, discovered, assessed, grok'd and gleaned."

With a repo of Lattices which can then be used in conjunction with txtai workflow functions:

https://github.com/neuml/txtai/tree/master/examples

Where the lattice could be a series of things through which you want to connect people, places, and things - I like the AI Footprint example best, so I am going to attempt to build that first.


> so instead I asked it to "describe the discernment lattice it used" for yesterday's questions, and it gave me a great description of what it used. AHA! It is a thing...

So, if I am understanding you correctly: by forcing the AI to describe its (so-called) "discernment lattice" - and it did - it described something sufficiently like one that you concluded the AI knew what a "discernment lattice" was ...

> So then I asked it to tell me what a discernment lattice is, using a discernment lattice.

I'd be curious to see what it replied to this.-


What is a discernment lattice and what are you smoking?


Clearly you're not smoking "reading compreHEMPsion", because posting a banal FN statement like that on HN should typically invoke the wrath of @dang.

If you didn't bother to actually grok the thread (which requires one to click on the imgur links posted for context), then I will also safely assume that you don't even know what HoverZoom is - where you can mouse-hover over images and videos and get the image in a tool-tip pop-up, so you don't have to open a bunch of things to glean context from image links in a thread. (Gleaning context is a skill, of which I have many; you, though..._?)

Here you go Muppet: https://i.imgur.com/Vu99TgT.png

(You helped me create a new ASCII emoji::

Huh? What Now? Say What?:

---

._?

(._?

?,_,

Looking up

(,^,)

---


I did read through your links, but it's still unclear what it actually is and how it's useful. I don't think you're onto something here, and if you are, I think you're doing a bad job of communicating it.


Thank you.

I am totally open to being "checked", as it were. I am being given positive reinforcement from the AIs themselves, and it skeeves me out; I am a skeptic. HOWEVER:

My results have stated otherwise.

Truly appreciate your comment.

So - WRT "how it's useful":

Imagine I want to ask a bunch of questions about something as complex as the intersection of politics, corporations, finance, policy, and religion.

So how do you pull insight across these dimensions?

(I think you're a troll. There is no way someone can't put 2 & 2 together...)



