
I disagree with the premise of this article. Modern AI can absolutely be very useful and even disruptive when designing FPGA's. Of course, it isn't there today. That does not mean this isn't a solution whose time has come.

I have been working on FPGA's and, in general, programmable logic, for somewhere around thirty years (I started with Intel programmable logic chips like the 5C090 [0] for real-time video processing circuits).

I completely skipped over the whole High Level Synthesis (HLS) era that tried to use C, etc. for FPGA design. I stuck with Verilog and developed custom tools to speed up my work. My logic was simple: if you try to pound a square peg into a round hole, you might get it done, but the result will be a mess.

FPGA development is hardware development. Not software. If you cannot design digital circuits to begin with, no amount of help from a C-to-Verilog tool is going to get you the kind of performance (both in terms of time and resources) that a hardware designer can squeeze out of the chip.

This is not very different from using a language like Python vs. C or C++ to write software. Python "democratizes" software development at a cost of 70x slower performance and 70x greater energy consumption. Sure, there are places where Python makes sense. I'll admit that much.

Going back to FPGA circuit design, the issue likely has to do with the type and content of the training data, and with the overall approach to training. Once again, the output isn't software; the end product isn't software.

I have been looking into applying my experience in FPGA's across the entire modern AI landscape. I have a number of ideas, none well-formed enough to even begin to consider launching a startup in the sector. Before I do that I need to run through lots of experiments to understand how to approach it.

[0] https://www.cpu-galaxy.at/cpu/ram%20rom%20eprom/other_intel_...


My guess is that it is far more likely that someone from Apple who actually cares will see this on HN and forward it to the appropriate stakeholders than a random letter reaching Tim.

I might still try.


> I receive and make about 1-2 accidental calls a week, it hasn’t been a big deal for me.

A couple of accidental calls a week is too much when you are dealing with people across many time zones. You don't want to call the CEO of a company in Dubai at 3 AM because your phone is too stupid to prevent this from happening when the person you were speaking to locally ended the call ten milliseconds before you.

I'm happy to hear this isn't a problem for you. Good. Please don't assume the same is the case for others. A Google search easily reveals this isn't just "ranting", as another comment put it.

This issue can make you look unprofessional and even inconsiderate. The person you woke up in New York, London, Singapore or Buenos Aires isn't going to think "this only happens a few times a month".

Again, I tend to deal with executives from mid to large enterprises worldwide. I do not ever want to call any one of them by accident. My US $1,500 phone should be able to prevent this without completely destroying other usability elements.

As I said in my post, the solutions are not complex at all. No need to train a new AI agent or anything like that. Just some sensible rules and time-based settings take care of it.
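To make that concrete, here is a minimal sketch (in C, purely illustrative; the struct, field names and thresholds are invented for this example, not any real phone API) of the kind of rule I have in mind:

    #include <stdbool.h>

    /* Hypothetical guard against accidental calls. All names and
       thresholds here are made up for illustration only. */
    typedef struct {
        int  callee_local_hour;        /* 0-23, from the contact's time zone   */
        int  seconds_since_last_call;  /* time since the previous call ended   */
        bool screen_was_locked;        /* call initiated from a locked screen? */
    } call_context_t;

    /* Ask for an explicit confirmation tap instead of dialing immediately. */
    bool require_confirmation(const call_context_t *ctx)
    {
        if (ctx->callee_local_hour >= 22 || ctx->callee_local_hour < 7)
            return true;   /* callee is probably asleep                  */
        if (ctx->seconds_since_last_call < 10)
            return true;   /* likely a stray tap right after hanging up  */
        if (ctx->screen_was_locked)
            return true;   /* classic pocket dial                        */
        return false;
    }

Any one of those checks would have stopped the 3 AM call to Dubai described above.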


> 35 U.S.C. § 102. Obviousness means that the claimed invention as a whole would have been obvious to a person of skill in the art, under 35 U.S.C. § 103.

This is the part that always gets me. Having read through over a couple thousand patents, it is my opinion that the vast majority of them are obvious to anyone skilled in the art. Some of them are so ridiculous that a university student about to finish a relevant degree would consider the claims obvious.

This bothers me deeply because patents that should not have been granted force us to play a patent arms race. If the other side has an axe, you have to have an equal or better axe...and it gets nastier from there.

In some industry sectors you'd be crazy to put out a product without ensuring you have enough legal weapons of IP war to protect yourself from other IP, as well as to slow down or eliminate copycats and competitors who will gladly take advantage of your "R" (research) at zero cost. In "R&D" the "R" is usually the most expensive phase. Once you know what you are building, the "D" tends to be simpler, shorter and significantly less expensive.

It stands to reason that getting a patent should become more and more difficult over time. As more is invented, the "art" and those skilled in it become more sophisticated. Which means the rate of true invention (not fake invention) should come down to an asymptotic level. We should see fewer true patents per year, not more fluff patents per year.


The only things that matter are the claims. You have to read and analyze them in order to understand where the legal IP fence is being built. If you are not used to reading and dissecting patents, the claims can be difficult to understand.


I absolutely remember encountering one of these at a friend's house. I clearly remember the detail about moving the pieces out of the way as needed to make moves. I do not remember the noise of the xy mechanism.

The noise is likely the result of the recording method. I don't remember it being that loud at all.

Who negotiates these large domain acquisition deals?

I imagine that without a professional negotiator in the process many domain owners would easily leave lots of money on the table.


I have always avoided Qt for any projects because of the incomprehensible licensing model that could very easily create a legal minefield for your product.

No. Thanks.


It's just LGPL if you make a standard desktop app. You can build pretty much the entirety of KDE with the LGPL parts of Qt.


Here in Los Angeles we have seen dozens of them. Our reaction is consistent: It's ugly and ridiculous. My wife says the design reminds her of a roll-top garbage can of some sort.

My kids probably had the best comment: If Tesla had designed a real truck they would have sold millions.

Keep in mind this is the comment of teenagers who don't have a sense of the size and scale of markets. The point, however, should not be missed: There was an opportunity to enter a truck into the truck market, not an Ikea trash can on wheels.

Sometimes it is a good idea to listen to kids. I remember when one of Apple's original guiding principles of OS design was to make the computers usable by anyone, even young kids. A kid, in this case, does not see the utility of a truck that does not seem to fit the "form and function" of a truck, like an F150 or variants by other manufacturers.


I posted this about a week ago:

https://news.ycombinator.com/item?id=41816598

This has been done for decades in digital circuits, FPGA's, Digital Signal Processing, etc. Floating point is both resource- and power-intensive, and doing FP math without dedicated FP hardware is something designers have avoided for decades unless absolutely necessary.


Right, the ML people are learning, slowly, about the importance of optimizing for silicon simplicity, not just reduction of symbols in linear algebra.

Their rediscovery of fixed point was bad enough but the “omg if we represent poses as quaternions everything works better” makes any game engine dev for the last 30 years explode.


A lot of things in the ML research space are just old concepts rebranded with a new name as “novel”.


Explain more for the uninitiated please.


Not sure there's much to explain. Using integers for math in digital circuits is far more resource- and compute-efficient than floating-point math. It has been decades since I did the math on the difference; I'll just guess that it could easily be an order of magnitude better across both metrics.

At a basic level it is very simple: a 10-bit bus gives you the ability to represent numbers between 0 and 1 with a resolution of approximately 0.001 (1/1024). 12 bits would be four times finer. Integer circuits can do the math in one clock cycle. Hardware multipliers do the same. To rescale the numbers after multiplication you just take the N high bits, where N is your bus width, which is a zero-clock-cycle operation. Etc.
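As a rough illustration (my sketch, written in C rather than actual hardware description): values in [0, 1) stored as 10-bit fixed-point integers, multiplied as plain integers, then rescaled by keeping only the high bits:

    #include <stdint.h>
    #include <stdio.h>

    /* Minimal sketch of 10-bit fixed-point ("Q0.10") arithmetic: values in
       [0, 1) are stored as unsigned integers scaled by 2^10. */
    #define FRAC_BITS 10
    #define ONE       (1u << FRAC_BITS)   /* 1.0 in Q0.10 */

    typedef uint32_t q10_t;

    q10_t  to_q10(double x)   { return (q10_t)(x * ONE + 0.5); }
    double from_q10(q10_t x)  { return (double)x / ONE; }

    /* Multiply two Q0.10 values: the 20-bit product is rescaled by
       dropping the low FRAC_BITS bits, i.e. keeping the high bits. */
    q10_t q10_mul(q10_t a, q10_t b) { return (a * b) >> FRAC_BITS; }

    int main(void)
    {
        q10_t a = to_q10(0.375);   /* 384 in Q0.10 */
        q10_t b = to_q10(0.25);    /* 256 in Q0.10 */
        q10_t p = q10_mul(a, b);   /* 96, i.e. 0.09375 */

        printf("0.375 * 0.25 ~= %f (raw %u, resolution ~%f)\n",
               from_q10(p), (unsigned)p, 1.0 / ONE);
        return 0;
    }

In an FPGA that final shift is just a matter of which wires you route to the next stage, which is why the rescale costs zero clock cycles.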

In training a neural network, the backpropagation math can be implemented using almost the same logic used for a polyphase FIR filter.
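A loose sketch of why the two look so similar (again mine, in C, with made-up names): a FIR filter's tap loop and a weight-gradient accumulation both reduce to the same fixed-point multiply-accumulate, so essentially the same MAC hardware can serve either:

    #include <stdint.h>
    #include <stddef.h>

    /* Both routines are the same multiply-accumulate pattern in Q0.10
       fixed point (values scaled by 2^10); only the operands differ.
       Illustrative sketch, not a full backpropagation implementation. */
    #define FRAC_BITS 10

    /* FIR filter output: y = sum over k of coeff[k] * sample[k] */
    int32_t fir_mac(const int32_t *coeff, const int32_t *sample, size_t taps)
    {
        int64_t acc = 0;
        for (size_t k = 0; k < taps; k++)
            acc += (int64_t)coeff[k] * sample[k];
        return (int32_t)(acc >> FRAC_BITS);   /* rescale: keep the high bits */
    }

    /* Weight-gradient accumulation for one neuron:
       grad[k] += error * activation[k] -- the same MAC, different inputs. */
    void grad_mac(int32_t *grad, const int32_t *activation,
                  int32_t error, size_t n)
    {
        for (size_t k = 0; k < n; k++)
            grad[k] += (int32_t)(((int64_t)error * activation[k]) >> FRAC_BITS);
    }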

