
A simple technique not listed here for drawing contour edges:

1) Create an array storing all unique edges of the faces (each edge being composed of a vertex pair V0, V1), as well as the two normals of the faces joined by that edge (N0 and N1).

2) For each edge, after transforming it into view space: draw the edge if sign(dot(V0, N0)) != sign(dot(V0, N1)). (In view space the eye sits at the origin, so V0 doubles as the view vector; the signs differ exactly when one adjacent face is front-facing and the other back-facing, which is what puts the edge on the contour.)
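
A minimal sketch of step 2 in Python/NumPy (the function and variable names are mine, and I'm assuming the edge list from step 1 carries the two vertex indices plus both face normals, all already transformed into view space):

    import numpy as np

    def contour_edges(verts, edges):
        # verts: (N, 3) array of view-space vertex positions (eye at the origin)
        # edges: iterable of (i0, i1, n0, n1) built in step 1, where i0/i1
        #        index into verts and n0/n1 are the view-space normals of
        #        the two faces sharing the edge
        result = []
        for i0, i1, n0, n1 in edges:
            v0 = verts[i0]  # eye at origin, so v0 is also the view vector
            # one face toward the eye, the other away -> contour edge
            if np.sign(np.dot(v0, n0)) != np.sign(np.dot(v0, n1)):
                result.append((i0, i1))
        return result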


I work in Hollywood. Like USC’s Annenberg Inclusion Initiative, the UCLA Center for Scholars & Storytellers is just another example of a political advocacy group using a university as cover for ideological messaging under the guise of “research.”

Note that both Annenberg and S&S rarely, if ever, publish the raw data used to draw these suspiciously blunt conclusions about media representation (in this case, about supposedly “dated and unrelatable romantic tropes”); they merely crank out glossy press releases designed to be regurgitated by overworked trade-magazine bloggers. Underneath these are lousy self-report surveys, with data carefully massaged to fulfill an intended purpose.

There’s a lot of snake oil peddled in town, but this stuff irritates me the most, as it is often reported uncritically and discussed without any reflection on why these “studies” are conducted, or who funds them.


A later version is available here (along with the compiler itself): https://winworldpc.com/product/metaware-high-c-cpp/33x


I do some light hacking in iDOS for fun as well. Might as well go straight to Borland C++ 3.1 [0] (the "professional" version) over Turbo C++ these days.

When you eventually hit the Great Wall of 16-bit real mode, I recommend switching to OpenWatcom [1], aka "the compiler that built Doom," which comes with something closer (relatively speaking) to a modern toolchain. DJGPP [2] has been recommended as well, although I found it more trouble to set up.

[0] https://winworldpc.com/product/borland-c/30

[1] https://github.com/open-watcom/open-watcom-v2

[2] https://www.delorie.com/djgpp/


It’s great. I’ve been running iDOS regularly on a 3rd-gen iPad Pro, at 26800 cycles (roughly a 486DX4 at 100 MHz). I can run every DOS game well supported by DOSBox, write in WordPerfect 6.2, install and run Windows 3.1 at full speed, and compile and debug code via Borland C++ 3.1/4.6 or OpenWatcom 2.0. The file system is easily accessible through the Files app, so I can transfer files in and out without tedious mounting of images.
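
For reference, the cycle count is just the standard DOSBox CPU setting; in a plain dosbox.conf it would look something like this (iDOS exposes the same knob, though exactly where you set it differs):

    [cpu]
    core=auto
    cycles=26800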

My only (and very minor) complaint is that it relies on the ancient DOSBox-0.74-3, rather than one of the forks with broader support and better emulation of exotic hardware, like DOSBox-X.

With regard to UTM SE, I’ve had mixed results as well. For best results, I recommend installing Windows 95, which was very performant. (Otherwise, keep in mind that the SE stands for ‘Slow Edition’ — really.)


Absolutely. Chandler was in fact employed by an oil company before he tapped a richer vein (sorry) with his fiction.

Incidentally, while re-reading Chandler's "The Lady in the Lake," I realized that the scene in which Marlowe meets his employer, dozing off in a "high-backed chair" in his club's library, was a description of the library at the Los Angeles Athletic Club. Today, if you're a member (or know one), you can pop up to the third floor, still preserved in all of its interwar ersatz-Gilded-Age splendor, and doze off in one of those distinctive high-backed chairs after a drink and schvitz.

It turns out that Chandler had worked across the street when employed by Dabney Oil, and visited the L.A. Athletic Club frequently.


Ah, you’re right. I’ve read a few biographies of Chandler but somehow totally forgot he worked for an oil company. IIRC it was a job he got through his father-in-law.

Great trivia about the club - will definitely have to visit that if I make it to LA again.


I fail to see how this is, as one specialist puts it, a “profound ethical dilemma," and not simply a temporary and embarrassing misalignment of resources. If you can prevent people from dying—and enable them to live meaningful, sentient lives despite being tethered to a device—then the solution is clear: scale up production while making the devices smaller and cheaper, and in the meantime seek out alternative long-term facilities for palliative care to avoid occupying hospital beds.

The fact that the article frames the problem as "we have this fear of letting people die"—instead of as a difficult but solvable problem of research, economics, and logistics—seems to me emblematic of a certain dead-end, anti-growth mindset that pervades much of the supposedly humanistic writing in the NYer.

So what if this is "a bridge to nowhere?" So is life! And in the end, we are all, in our own ways, waiting for time to run out, tethered to something immovable.


> scale up production while making the devices smaller and cheaper, and in the meantime seek out alternative long-term facilities for palliative care to avoid occupying hospital beds.

I feel like you're just hand-waving away the issue. If they could move them out of the ICU, they would have; the issue is that these patients require constant care while on the ECMO machine.

Additionally, while "smaller, cheaper, no care required" devices may appear in the future (the article talks about this very thing), they're not here _right now_. There are currently a limited number of machines, and of people who can maintain them, in the hospital; hence there's an immediate problem to deal with when more people could benefit from the machines than there are machines to go around.


I'm not hand-waving the issue; as I said, it's clearly a very difficult problem. But it is not an ethical dilemma; it is a resource-allocation problem. In the United States, we are historically good at solving those, when properly motivated.

Why can't more machines be made? Why are there a limited number of people who can maintain them and perform care on a long-term basis? These are questions that lie downstream of many long-standing institutional problems with the practice of medicine in the US, and framing them as ethical "maybe-some-people-should-just-die" questions is missing the broader story.


> But it is not an ethical dilemma; it is a resource-allocation problem.

Sure, it's a resource-allocation problem, but _right now_ it is an ethical dilemma. None of what you're suggesting will suddenly make the problem go away within a year, hence why I called it hand-waving.

> Why are there a limited number of people who can maintain them and perform care on a long-term basis?

You're just asking the question "why aren't there more people working in the ICU?" Somehow I don't think this is a problem that would be solved in a year if someone just 'finally sat down and worked on it'.

As the article points out, people are _already_ working on the issues you raise; it just turns out they're actually hard problems to solve.


Well, no, the article doesn't address any of those issues. In the case of the teenager apparently allowed to die through refusal of service, the logic of the situation as presented by the hospital—either the child dies, or others die—is not interrogated with regard to possible alternatives. Vastly increasing ICU capacity nationwide in a single year might be tough (although I don't agree with your framing of its impossibility), but why could it not be done in this particular, local case? It seems obvious that hospitals have strong incentives to present cost-minimization as "profound ethical dilemmas."

The future work briefly touched on at the end—focusing on organ transplants and miniaturization, and framed by a professional arguing that "the overarching problem here is that we have this fear of letting people die"—does not cover any of the obvious but difficult ways of dealing with individual situations in the near-to-medium term (bottlenecks in production, personnel, etc.).


It absolutely boggles my mind that this is such a controversial viewpoint. We're talking about a machine that can keep people not just alive, but awake, talking, and riding an exercise bike, without working lungs or a heart. It's an insane, miraculous treatment, and extremely strong evidence that death is something we can conquer. And people come along and just downvote comments like yours, with no explanation at all, because it's so deeply ingrained in their brains that they and everyone they love simply must die, and that everyone who believes otherwise must be naive and stupid. Because death is this magical, spiritual, special problem unlike any other problem humanity has ever faced: the one thing we will never be able to solve?

It absolutely is a dead-end, anti-growth mindset, and I don't understand it. Why is everyone so in love with death?


It’s not a particularly fair accounting of the article’s framing, which also profiles people working precisely on making the technology more practical and portable, and which ends on a hopeful note. For the time being it’s a high-maintenance way to keep people alive, though, so the ethical dilemmas of resource allocation are real.


This framing is a false dilemma. All resource allocation decisions are also ethical choices.


No—not necessarily. In the case of, say, a hypothetical plane crash in a desert, with two thirsty survivors and one cup of water, resource allocation may also be a profound ethical dilemma. The article’s author (and hospital administrators) encourage us to see the situation in this light.

But in our vastly wealthy, highly productive 21st-century society, this situation need not be zero-sum; production can be scaled up on demand, priorities can be shifted, costs can be absorbed. What’s constrained in this case is not the supply of the life-saving resource, but the political and economic will to overcome inertia.

In this case, calling this an “ethical dilemma” stretches the definitions of both “ethics” and “dilemma” to the breaking point. There is a clear right answer here—but the insistence on choosing the wrong answer over and over, to keep costs down and avoid long-delayed reforms to medical staffing and supply chains, leads to tragic outcomes in which a patient’s survival is determined by institutional bureaucracy. That would be a much more useful framing!


Last I checked, we are not post-scarcity, so we are still those thirsty survivors vying over limited resources. There is only a clear "right" answer provided we first agree on ethics.

Whether you propose an industry subsidy, or campaign for regulatory reform, or initiate a cabal of technocrats to accelerate progress, every dollar towards this cause is a dollar not spent elsewhere. Strongly asserting that you have the one right answer doesn't make opportunity costs disappear.


>> Nobody is yet trying to create permanent wastelands inhabited only by robots.

On the contrary-- I think that capability would be enthusiastically adopted by a state like Ukraine, which is fighting an asymmetric defensive war against a larger aggressor with logistical advantages. Keep in mind that a "permanent wasteland" as a buffer was in fact the status quo in parts of the east prior to the Russian invasion in 2022, except the wasteland was maintained by human beings at a high political and economic cost. Today, both Russia and Ukraine create permanent wastelands in the form of extensive minefields, passing those costs on to posterity.

The autonomous No Man's Land--a relatively low-cost deployment of a buffer zone along a state border, in which nothing human may move and live--is likely to be the future of warfare in a world increasingly defined by ethnic conflicts, unchecked inter-state rivalries, and migratory pressures.


"This absolute horror scenario is what happened to writer Michael Berben (not his real name)."

Absolutely didn't happen. You might occasionally credit a story in the NYT or WSJ that uses an anonymous source—but when one appears on a blog by a "serial entrepreneur active in the content industry," flogging a subscription product aimed at exactly the sort of marginally employed writer targeted by the story, I would expect the average HN user to be slightly more critical of the claim.


I feel the same. I'm not sure there are many organizations out there applying AI detectors to writers they've had an existing, extensive relationship with, and then also giving a fuck if it pops up as a hit when they have no other complaints about the writing.

My wife's fairly connected to the writing world and I've not heard of this being a thing at all. Doesn't mean it's not anywhere, but if it were widespread I think I'd have heard of it. Most companies are trying to push more use of AI tools in writing, as far as I can tell, and I don't even have insight into content-farm parts of the market, where I assume it's just all AI all the time now.


My experience is that the things I write often have parts that are, for lack of a better word, somewhat boilerplate, e.g. preamble, background, explaining some technical term I used, etc. Necessary, but not core technical or otherwise differentiated content.

I've used LLMs for this as a sort of first draft. I edit the output but it's a perfectly serviceable way to get some words down and save me an hour or two.


In the realm of technical documentation, too, originality isn't a virtue in itself. (And I say this as a technical writer who doesn't even use LLMs regularly.)

If you needed to throw in a sentence or two about, e.g., what load balancing is, you're better off lifting a definition from an authoritative source than trying to come up with a "creative" rephrasing for its own sake. Standard terminology is standard because it works.


Funny, because I've been hearing this almost fortnightly on Reddit and freelance forums for a few months now.

And about the big middleman platforms siding with the client.

Some people prefer their anonymity, especially when they are put into the public sphere over disputes which can cost them future work. Notice the client wasn't named either.


The story is pretty plausible, though. I keep hearing about students being forced to rewrite genuinely original work because of plagiarism detectors, and what happens in schools eventually happens in the workforce.


It would be top irony if this story were AI-generated.


"AI text detectors" are complete and utter bullshit rife with egregious false positives, so if they are actually used to cut ties with any writer based on such allegations, there is a near-certain chance that many or most such writers are actually innocent.

This might be a bias-confirming fake anecdote, for sure. But the effect is real.


This is, at best, a wildly misleading headline, as the WGA would in most cases NOT allow it. This was the official WGA statement on AI-generated writing:

“The WGA’s proposal to regulate use of material produced using artificial intelligence or similar technologies ensures the Companies can’t use AI to undermine writers’ working standards including compensation, residuals, separated rights and credits. AI can’t be used as source material, to create MBA-covered writing or rewrite MBA-covered work, and AI-generated text cannot be considered in determining writing credits.

“Our proposal is that writers may not be assigned AI-generated material to adapt, nor may AI software generate covered literary material. In the same way that a studio may point to a Wikipedia article, or other research material, and ask the writer to refer to it, they can make the writer aware of AI-generated content. But, like all research material, it has no role in guild-covered work, nor in the chain of title in the intellectual property.

“It is important to note that AI software does not create anything. It generates a regurgitation of what it's fed. If it's been fed both copyright-protected and public domain content, it cannot distinguish between the two. Its output is not eligible for copyright protection, nor can an AI software program sign a certificate of authorship. To the contrary, plagiarism is a feature of the AI process.”


It seems they are trying to avoid ChatGPT doing to their union members what machine translation has already done to professional (human) translators. As far as I understand it, it's common practice for translation companies to automatically generate "alignments" (a sort of pre-translation of sentences between text in two languages) and then ask the translator to produce a full translation based on the alignments, paying them only for the time the company decides that task needs, which is of course less than the time a full translation from scratch would take.

Which sucks badly, because the automatically generated pre-translations are often pretty bad, and the translator must, in practice, do the full work anyway while being paid for only part of it.

My source is a bunch of friends and acquaintances who are professional translators; somehow I happen to know at least three distinct groups of them. Also, they're all Greek, so the situation may be different for language pairs where automatic translation works better than it does between, e.g., English and Greek.

