His GitHub repository also contains the code that prompts OpenAI to analyze and flag VA memos that might contradict hot-button issues from Trump's Executive Orders: https://github.com/slavingia/va/blob/main/eos/analyze_eos.py

System prompt:

"You are an AI assistant that analyzes internal government memos for compliance with 2025 Executive Orders. You identify references to DEI, gender identity, COVID policies, climate initiatives, and WHO partnerships that must be removed or modified according to new directives. Provide detailed analysis and be precise about what needs to be changed."
User prompt:
Based on this memo analysis, provide a detailed assessment of non-compliance with 2025 Executive Orders:
MEMO CONTENT:
{text[:10000]}
Rules for determining non-compliance:
1. DEI/DEIA Content:
- References to "diversity", "equity", "inclusion" in policy context
- Mentions of equity action plans
- References to Chief Diversity Officers or similar roles
- Performance criteria based on DEI targets
2. Gender Identity Content:
- References to gender identity or preferred pronouns
- Content about gender ideology or training
- Instructions to use non-binary pronouns
3. COVID Policy/Telework:
- COVID-19 vaccination requirements
- Pandemic-era remote work extensions
- Masking or testing policies that remain in effect
4. Climate/Environmental Content:
- References to climate resilience, zero-emission requirements
- Environmental justice language
- Sustainability criteria in purchasing/contracting
- Plastic straw bans or similar procurement restrictions
5. WHO Partnerships:
- References to World Health Organization partnerships
- Alignment with WHO protocols or frameworks
6. Other Key Areas:
- Affirmative action or racial/gender preferences in contracting
- $15 federal contractor minimum wage references
- Student loan forgiveness expansions
- Paper check disbursement references
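For readers who want to see how prompts like these are typically wired up, here is a minimal sketch of a Chat Completions call. The model name, file handling, and the elided prompt text are assumptions for illustration; the repo's actual code is at the link above.

```python
# Minimal sketch only; model name, input handling, and the elided prompt text
# are assumptions, not necessarily what the linked repo does.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an AI assistant that analyzes internal government memos for "
    "compliance with 2025 Executive Orders. ..."  # full system prompt as quoted above
)

def analyze_memo(text: str) -> str:
    """Run one memo through the compliance prompt and return the model's assessment."""
    user_prompt = (
        "Based on this memo analysis, provide a detailed assessment of "
        "non-compliance with 2025 Executive Orders:\n\n"
        f"MEMO CONTENT:\n{text[:10000]}\n\n"
        "Rules for determining non-compliance: ..."  # full rule list as quoted above
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("memo.txt") as f:  # hypothetical input file
        print(analyze_memo(f.read()))
```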
It's slow between each UI interaction compared to auto-complete-style systems, but CC can do much more per step. That's especially true if you use it to write scripts that encapsulate multiple steps of a process; then it can run those scripts.
Perhaps not coincidentally, that's what efficient (or "lazy", you choose) developers do as well.
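As a hypothetical example of what that looks like, you might have CC write a small driver script like the one below, so a multi-step routine becomes a single command it can re-run. The branch name and tool choices here are made up for illustration.

```python
#!/usr/bin/env python3
# Hypothetical helper script of the kind Claude Code can write once and then
# re-run: it bundles several repetitive steps into one command. The branch
# name and tools (ruff, pytest) are illustrative choices, not a prescription.
import subprocess
import sys

STEPS = [
    ["git", "checkout", "-b", "feature/cleanup"],  # create a working branch
    ["ruff", "check", ".", "--fix"],               # lint and auto-fix
    ["pytest", "-q"],                              # run the test suite
]

def run(cmd):
    print("$ " + " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit("step failed: " + " ".join(cmd))

if __name__ == "__main__":
    for step in STEPS:
        run(step)
```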
I think it's something you have to try in order to understand.
Running commands one by one and getting permission may sound tedious. But for me, it maps closely to what I do as a developer: check out a repository, read its documentation, look at the code, create a branch, make a set of changes, write a test, test, iterate, check in.
Each of those steps is done with LLM superpowers: the right git commands, rapid review of codebase and documentation, language specific code changes, good test methodology, etc.
And if any of those steps go off the rails, you can provide guidance or revert (if you are careful).
It isn't perfect by any means. CC needs guidance. But it is, for me, so much better than auto-complete-style systems that try to guess what I am going to code. Frankly, that guessing really annoys me, especially once you've seen a different model of interaction.
This device would also be useful in places with street-side bike lanes or no bike lanes at all, to see how closely bicyclists ride next to parked cars.
Dooring remains one of the greatest threats to bicyclist safety in many locations. Even places with great bike infrastructure often have streets with parking where cyclists must ride.
It's really awesome that you've taken a widely available tool like PyTorch and used it out of domain to provide a library like this, especially one focused on exact solutions and not approximations.
Any plans to include diffractive optics as well? (A totally self-serving question, given that refractive optics is much more common.) In a past life I taught holography and wrote interactive programs to visualize the image forming properties of holograms.
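For what it's worth, the usual starting point for diffractive work is FFT-based scalar propagation, which sits naturally on top of torch.fft. The sketch below is a generic angular-spectrum propagator with illustrative parameters; it is not part of the library being discussed.

```python
# Generic angular-spectrum propagator; not the library's API, just a sketch of
# how diffractive (wave) optics can be built on torch.fft. Grid size,
# wavelength, and distances are illustrative values.
import math
import torch

def angular_spectrum_propagate(field: torch.Tensor, wavelength: float,
                               dx: float, z: float) -> torch.Tensor:
    """Propagate a square complex field a distance z (all lengths in meters)."""
    n = field.shape[-1]
    f = torch.fft.fftfreq(n, d=dx)                       # spatial frequencies
    fx, fy = torch.meshgrid(f, f, indexing="ij")
    arg = (1.0 / wavelength) ** 2 - fx ** 2 - fy ** 2
    kz = 2 * math.pi * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.where(arg > 0, torch.exp(1j * z * kz),     # transfer function
                    torch.zeros((), dtype=torch.complex64))  # drop evanescent waves
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

# Example: 100 µm circular aperture, HeNe wavelength, propagated 5 mm.
n, dx, wavelength = 512, 2e-6, 633e-9
y, x = torch.meshgrid(torch.arange(n) - n / 2, torch.arange(n) - n / 2, indexing="ij")
aperture = (((x * dx) ** 2 + (y * dx) ** 2) < (100e-6) ** 2).float().to(torch.complex64)
intensity = angular_spectrum_propagate(aperture, wavelength, dx, z=5e-3).abs() ** 2
```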
I spent about five hours using Claude Code heavily yesterday to upgrade and enhance a four year old React web app. This app is widely used to reference anatomical nomenclature.
I was able to internationalize it for 45 major languages across the world (still subject to human testing). That makes it accessible to 85-90% of the world's population.
It cost me about $50. It saved me months of work on a "labor of love" project and allowed me to add, in a single day, lots of quality-of-life features that I just never would have gotten to otherwise.
Your comments are quite good and accurate. I would add that funding from private foundations is in many cases only viable because NIH overhead is paying for the supporting infrastructure. Very few biomedical researchers are supported solely by private funding (and that goes for the buildings and lab facilities, too).
These changes also directly impact research hospitals, not just universities.
That doesn't mean indirect costs shouldn't be reformed. This is simply the most disruptive possible way to do it.
Yep. I worked in a biomedical research department. Getting funding was always difficult, requiring two full-time grant writers for the government grants plus the department chair, who split their time raising money from private individuals, corporations, and all other sources.
Government has strict rules about what money can and can't be spent on. Take equipment given by the government: oftentimes it would gather dust and couldn't be disposed of, despite being obsolete and replaced by something else, because of government red tape.
Getting grants is very competitive, and there's rarely enough money to cover ordinary functions like the salary of an admin person, office supplies, or the other overhead needed for students, faculty, and staff to operate a functional department.
The losers in this will be patients around the entire world; the results will be a slowdown of advancements in medicine and STEM fields of all kinds, and the sucking sound of brains leaving for other countries.
Yeah, this will hit research hospitals, organizations like RTI, etc.
And yeah, the rates private foundations pay are enabled by the NIH rates, and by the fact that the NIH makes up the bulk of the funding in the first place. There's an argument, one I'm sympathetic to, that they shouldn't get a discount and the rate is the rate, but that's a very different reform than what we're seeing now.
Some important points that this article glosses over.
The FAA Academy, where all air traffic controllers are trained, is heavily oversubscribed. Recruiting policies aside, I can find no evidence that the FAA wasn't training as many controllers as it could through its academy. That remained true through the Trump 1 administration and into the Biden admin, with the exception of COVID: the pandemic was understandably a huge disruption, as were government shutdowns.
We can know this from the FAA Controller Staffing reports from 2019 (Trump 1, before the pandemic but after Obama) and 2024 (Biden). The 2024 report had been scrubbed from the FAA website when I last checked, but it is available through the Wayback Machine:
There appears to have been no urgency about this issue during Trump 1, judging from the report. Things changed in 2023, when an external safety report revealed the staffing problem and suggested improvements.
As a result, hiring almost doubled between 2010 and 2024, with 1,800 controllers hired in the last year. More importantly, the FAA followed the report's recommendation to use CTI schools as additional academies:
XML would require a schema to express the concepts in Preserves (or in JSON, for that matter).
The reason JSON is lower friction than XML for data representation is that you get basic data representations (numbers, strings, arrays, maps) for free in a natural native syntax that happens to parallel multiple programming languages.
XML, in contrast, is a meta-language that uses schemas to express different data representations. You've got to use attributes and elements to represent data and data types. XSD is a common datatype schema, but it's quite verbose, and serialized data looks very different from its representation in a programming language.
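To make that concrete, here is a small Python illustration; the XML element and attribute names are made up rather than taken from any particular schema.

```python
# Small illustration of the friction difference described above. The XML
# element/attribute names are invented for this example, not any standard.
import json
import xml.etree.ElementTree as ET

record = {"name": "ada", "scores": [9.5, 7.25], "active": True}

# JSON: the native structure (maps, arrays, numbers, booleans) serializes directly.
print(json.dumps(record))
# {"name": "ada", "scores": [9.5, 7.25], "active": true}

# XML: we must invent a structure for maps, arrays, booleans, and numbers,
# and rely on a schema (e.g. XSD types) to say what the text nodes mean.
root = ET.Element("record")
ET.SubElement(root, "name").text = "ada"
scores = ET.SubElement(root, "scores")
for s in record["scores"]:
    ET.SubElement(scores, "score", type="xsd:decimal").text = str(s)
ET.SubElement(root, "active", type="xsd:boolean").text = "true"
print(ET.tostring(root, encoding="unicode"))
# <record><name>ada</name><scores><score type="xsd:decimal">9.5</score>...</record>
```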
Preserves looks like a superset of JSON. It includes additional data representation concepts through syntax extensions, but the idea is the same.
What I don't see is a standard way to map record types (like "irl" in the tutorial) to a unique identifier like a URI/IRI, or something like a CURIE. That kind of feature would allow Preserves to better describe standardized record types.
Note that the Supreme Court decided the argument based on national security grounds, not content manipulation grounds.
Justice Gorsuch in his concurrence specifically commended the court for doing so, believing that a content manipulation argument could run afoul of First Amendment rights.
He said that "One man's covert content manipulation is another's editorial discretion".
Be that as it may, I think a large percentage of the opposition don't buy this natsec reasoning at all. You could use that excuse for anything, like mass surveillance via the Patriot Act...
EFF's stance is that SCOTUS's decision based on national security ignores the First Amendment scrutiny that is required.
> The United States’ foreign foes easily can steal, scrape, or buy Americans’ data by countless other means. The ban or forced sale of one social media app will do virtually nothing to protect Americans' data privacy – only comprehensive consumer privacy legislation can achieve that goal.
Shutting down communications platforms or forcing their reorganization based on concerns of foreign propaganda and anti-national manipulation is an eminently anti-democratic tactic, one that the US has previously condemned globally.
I don't buy it either. Entire generations are growing up without expectations of digital privacy. Our data leaks everywhere, all the time, intentionally and otherwise.
I think it's more about the fact that users of the platform are able to connect and share their experiences and potential actions for resolving class inequality. There's an entire narrative there that is outside of US govt/corp/media control, and that's a problem (to them).