Tensorbook (lambdalabs.com)
70 points by tosh on Dec 28, 2022 | 62 comments



This thing is absurd. I work in ML research and I use my MacBook Air M1 to train LLMs from anywhere, but I would never let them touch my local machine. Even if they'd run (most of them wouldn't - even on a 3080 Ti), they would ruin my battery life. So I simply connect to a GPU server via VPN and ssh + VS Code Remote. Only students or beginner enthusiasts might think that training this stuff on the go actually needs a beefy laptop.


I agree with you. I bought an at-home GPU rig 5 years ago, and it was good for a while. Then I realized how much time I could save myself by paying Google Colab $10/month for improved GPU access - close to zero config hassles. Running in Jupyter-style notebooks is not optimal, but it is good enough. I am in my 70s and maybe it is because of my age, but I look critically at all my activities and always ask “is this really what I want to be spending time on today?”


> I am in my 70s

You sir are very inspiring


This product is the AI/ML equivalent of "gaming" or "military grade" products, all of which are just over-priced crap with RGB/camo added.


Is it though? I personally prefer local development so I can test out my code instantly, especially when the internet is flaky. Once you have confidence in your code, you train it in the cloud.


Develop on a tiny slice of your dataset locally if you must. Then train properly elsewhere (in the cloud of your choice).
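In practice that can be as simple as capping how many rows you load for a local smoke test before launching the real run elsewhere. A minimal sketch in Python (the file name, label column, and model are hypothetical placeholders):

    # Smoke-test the pipeline on a tiny slice of the data locally,
    # then launch the full run on a cloud GPU box.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("data/train.csv", nrows=5_000)  # only the first 5k rows locally
    X, y = df.drop(columns=["label"]), df["label"]
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1_000)
    model.fit(X_tr, y_tr)
    print("local smoke-test accuracy:", model.score(X_val, y_val))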


I can always test out my code instantly. With the VS Code Remote SSH plugin it's exactly like running everything locally, but in reality only the GUI runs on my laptop. The necessary connection speeds are tiny compared to what you'd need on the go to sync models and datasets.


I develop my code directly on the HPC cluster. What do you even mean by testing your code instantly? You don't have to run a batch job every time you need to test something. You can use interactive mode.


You will probably keep your dataset in cloud storage, and Google Cloud has much higher network speeds, which makes things much faster than working locally (see the sketch below).

Obviously if you have your own local data center then you would use that
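For what it's worth, with the right client libraries installed you can point your loader straight at the bucket; a tiny sketch (assumes pandas plus the gcsfs package, and the bucket/object names are made up):

    # Read training data directly from a GCS bucket instead of local disk.
    import pandas as pd

    df = pd.read_csv("gs://my-training-bucket/datasets/train.csv")
    print(df.shape)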


You could do this via Jupyter notebooks and Visual Studio Code.


Transformer adapters/fine-tuning would be a good fit, especially while traveling with a power source next to you.
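For a rough sense of what that looks like, here is a hedged sketch of LoRA adapter fine-tuning with the Hugging Face transformers and peft libraries; the model name and hyperparameters are placeholders, not a recommendation:

    # Adapter (LoRA) fine-tuning: only a small fraction of the weights are trained,
    # so it can fit on a mobile GPU.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_name = "gpt2"  # a small model that fits on a laptop GPU
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of the full model

    # ...then run a normal training loop (or transformers.Trainer) over the fine-tuning data.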


By that reasoning, why does the M1 even need a GPU?


You couldn't even watch modern YouTube videos on most CPUs were it not for integrated accelerators. The M1 includes exactly that kind of acceleration, but in a power-efficient and reasonably performant way.


The M1 chip is both a CPU and a GPU; all computers need at least a limited GPU.


You need some sort of GPU to run any OS in a decent way


Why should anyone buy a laptop for $5k, with a $600 GPU in it? With that money you could buy a normal laptop and rent the GPU, or even buy the GPU and place it in a server.

Also, the $500 charge for Windows 10 and pre-installed CUDA, cuDNN, and TensorFlow is straight-up scamming people.


It's pretty ironic considering this company also sells cloud GPUs (usually much cheaper than AWS) and preconfigured GPU racks, so they clearly know how the market works in this space. I really wish I could have been a fly on the wall when they decided to add this to their product roster.


Because:

> performance up to 4x faster than Apple’s M1 Max


> up to 4x

That's not a minimum performance guarantee. They're talking about an upper bound. It's possible that the 4x is some niche benchmark.


That's just speculation. And even if it's 2x, it's still a very welcome improvement.


Certainly, but without knowing the workload that was used for the benchmark it’s a meaningless marketing number.


What a weird product. So it's just a regular Razer gaming laptop, but painted differently to make it look like a MacBook, and with a ridiculous surcharge on top of it because "ML"? Frankly, if I were in the ML field I'd just buy the M2 MacBook Air (more than enough for prototyping and comparable CPU performance, with better display and battery) or some other ultracompact business laptop with decent screen and put the rest of the money into Colab or some other cloud GPU service.


The Zephyrus G15 is much better value for local ML training/inference. Moreover, the mobile 4080 is just around the corner with desktop-3090-level performance, so they likely need to clear out old stock. Razer is also not known for longevity.


18 months ago I purchased a 17" Lenovo Legion with an RTX 3070, R7 5800H, 1 TB SSD, and 32 GB RAM for $1500 - brand new. It's a bit heavy, but works just fine - I use it for smaller DL projects, too. $5k for the equivalent Tensorbook seems a bit steep, no?


It sounds amazing to me that today such a powerful machine can be had for a measly $1.5k, and also that it is considered only suitable for “smaller” projects. I remember reading about a (very expensive, no doubt) IBM System/360 that was used by NASA in 1969 (incidentally, the year UNIX was born), and also how people paid upwards of $2k (in 80s money) for an IBM PC with a 4.77 MHz 8088 chip in it.


I remember paying almost $5k for a 133 MHz desktop computer back in 1995 - it came with Windows 95, which had just been released. It was used almost exclusively for office stuff - and was woefully outdated just a couple of years later. At which point the new computer we bought "only" cost around $2k - $2.5k.


Agreed. I just purchased a Maingear Vector Pro 2 with a mobile 3080 Ti (16 GB, ~165W power) for $1700. It's the biggest purchase I've made, but I can quickly prototype ML models and it makes research projects so much snappier.


"Up to 4x faster than Apple M1 Max" they say ...

Aah, that famous marketing "up to"! So I guess that's in certain cherry-picked scenarios then?

Also what they don't tell you is that the Apple will happily do ML for hours on end ON BATTERY with zero performance impact. Whilst an x86 laptop will need to be permanently tethered to your nearest power socket in order to obtain top performance. Otherwise you will suffer a performance drop and kill off your battery life.

I know the usual Apple bashers are gonna hate, but the fact of life is that for most people Apple has done a sterling job on their Apple Silicon laptops. No doubt more to come with the M2 in due course.


Well, I am a big proponent of Apple products, but for ML-related work Apple Silicon is not the best value for money currently. This will likely change in the future, when we have larger matrix accelerators and ML-specialized data types in the GPU/AMX, but Apple is not there yet.


>ON BATTERY

Why would I even care about that for ML?

>I know the usual Apple bashers

Apple fanboys found one benchmark that looks fine (performance per watt) and decided it applies to everything and is the only thing that ever matters - even when it's totally irrelevant.

It used to be they cared about MHz, until it became 'the MHz myth' when Apple was losing.


[flagged]


>If you're going to be buying a laptop to leave it plugged into the mains the whole time when doing intensive tasks, then you might as well buy a workstation ?

I'd rather have it plugged into the mains so I can also plug it into a nice big monitor and a large keyboard and a mouse, so I can use it like a human being and not a computer appendage.

A portable device is for BYOD. Actually working with it while on the go? I'd never do this unless it's an emergency.

[EDIT: I did not and would not have flagged parent comment. It's a bit direct but I've seen worse here.]


> If you're going to be buying a laptop to leave it plugged into the mains the whole time when doing intensive tasks

Well, this is how most people use laptops, isn't it? The main benefit of portability is that you can move to another location in between tasks, with little to no setup and teardown, and without having to suspend or power off the machine. Few people do anything on their laptops while walking around - the usual use case is taking your computer to a meeting room or a cafe, and the first order of business for anyone is usually to find a power outlet.


> Well, this is how most people use laptops, isn't it?

No, it is not how most people use laptops.

For example, airline travel.

Let's start at the airport first, shall we?

Most airports, for "security reasons", don't have many power sockets hanging around. And if they do, they are generally at inconvenient centralised charging desks. So that's a bit of a pain if you want to work somewhere reasonably quiet and away from the hordes.

Of course you may be an Airline Elite member, in which case you get access to the lounges. But lounges are generally busy places. Many don't have that many power sockets. You also need to wrestle with power adapters etc. Finally, your laptop charging cord is a trip hazard waiting to happen.

Then let's get on board the aircraft.

The majority of short-haul aircraft do not have any sort of charging mechanism available, so that's that ruled out.

Meanwhile long-haul wide-body aircraft will generally have power (and more modern ones USB, although the USB power generally will be limited and insufficient for a hungry laptop).

So overall, in reality, the only time you'll be able to comfortably use your laptop on mains power during air travel is on a long-haul wide-body flight where you are seated for 14+ hours (other than toilet breaks etc.).

> the first order of business for anyone is usually to find a power outlet

The whole point is that Apple makes that "first order of business" a deprecated relic of the past.

With an Apple silicon laptop you can turn up and work most or all of the day without having to concern yourself where the nearest power outlet is.


Sadly, Apple Silicon is still quite difficult to properly configure for some areas of ML whilst leveraging the GPU (object detection, for example), so having that Nvidia GPU really makes the setup smoother.


Wouldn't really want to do heavy computational work on any laptop. Thermal throttling will gut your performance. Laptops just don't have the cooling necessary to have it any other way.


But that's exactly where newer Apple laptops shine. They don't have the throttling issues associated with x86 simply because they use 2-4 times less power for the same level of performance. E.g. my M1 Max CPU maxes out at 5-6 watts per core, while delivering performance comparable to that of an Intel P-core running at 4.5-4.8 GHz. In multicore, that's 40 watts for the level of performance where Intel or AMD would need somewhere between 80 and 150 watts, depending on the configuration. And 40 watts is something that a laptop can easily dissipate.


The efficiency of Apple silicon is a matter of fact now; however, isn't Nvidia with its CUDA still king in this segment? Please correct me if I'm wrong, but doing ML/DL on the CPU instead of the GPU seems to be the least efficient way to go about it?


Note that I was talking about the CPU specifically. The GPU on Apple is also more efficient (approx 0.25 TFLOPs/watt for the M1 series), but Apple GPUs lack support for ML-optimized FP representations (a primary reason why Nvidia is so good in this domain). Apple does have a matrix coprocessor which offers excellent performance/watt for inference, but these units are relatively small and only offer limited aggregate performance.

I think it’s just a question of time until Apple offers hardware support for BFLOAT and other formats on the GPU and AMX (they already have BFLOAT16 in the CPU), at which point their ML performance will improve dramatically.


> nVidia with its cuda still king in this segment

I suspect this will gradually change, perhaps especially now that a lot of effort has been made to bring tooling such as PyTorch over to Apple silicon (see the sketch below).

> on CPU instead of GPU

But Apple isn't doing it on CPU.

You are thinking in terms of x86 discrete components.

Apple Silicon is a fully integrated architecture including unified memory. That's what makes it so efficient.
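As a concrete example of that tooling, PyTorch can target the Apple GPU through its MPS backend; a minimal sketch (the model here is a trivial placeholder):

    # Select PyTorch's Apple-silicon (MPS) backend when available, else fall back to CPU.
    import torch
    import torch.nn as nn

    device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
    model = nn.Linear(128, 10).to(device)
    x = torch.randn(32, 128, device=device)
    print(model(x).shape, "on", device)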


Yes, Nvidia still obliterates the M1/M2 in deep learning. The M1 is close to a GTX 1650 in real-world DL workloads, though in theory, based on TFLOPs, it should be around a GTX 1070.


> Apple will happily do ML for hours on end ON BATTERY

At 1/10th of the speed...


> At 1/10th of the speed...

Yawn.

As has already been clearly explained numerous times in this discussion, e.g. @ribit[1] below, there is no throttling on Apple Silicon when running on battery.

[1] https://news.ycombinator.com/item?id=34159456


> there is no throttling on Apple Silicon when running on battery

... yet. The computational demand of typical work will increase to eat up the extra capacity, as it always does - and the only way for a laptop to handle the same workload plugged or unplugged is to cap its plugged-in performance.


It's not about throttling, it's about mobile 3080 Ti vs M1 GPU performance. The M1 hits around GTX 1650 level for deep learning workloads in GPU benchmarks.


You can buy the exact same laptop, but $1300 cheaper if you can install Linux yourself: https://www.razer.com/gaming-laptops/Razer-Blade-15/RZ09-042...

Or get a more powerful CPU for $100 cheaper than the Razer with the Asus ROG: https://www.bestbuy.com/site/asus-rog-16-wqxga-165hz-gaming-...


But you won't get the premium support when TensorFlow can't find your GPU[0] or Jupyter isn't opening in your browser (a quick sanity check is sketched below).

[0] https://lambdalabs.com/deep-learning/laptops/tensorbook/supp...
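The usual first sanity check there is a couple of lines of Python (assumes a TensorFlow GPU build; the second line may print None on CPU-only installs):

    # Does TensorFlow see the GPU, and which CUDA version was it built against?
    import tensorflow as tf

    print(tf.config.list_physical_devices("GPU"))
    print(tf.sysconfig.get_build_info().get("cuda_version"))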


It may still make sense for institutional clients to pay the $1.3k premium for the guaranteed Linux support.


Possibly, though I don't know of any companies that are training ML models on laptop hardware.


Anecdotally, I've had a terrible experience with Razer. I felt like their laptop was a time bomb. About 3 weeks after the warranty expired, the screen died (their warranty was also only one year). When I went to their HQ to ask for repairs, they said a screen replacement would cost $1000. I went out myself and did it for $150. Needless to say, I don't think I'll be buying anything Razer from now on. I'm just glad I bought it on discount.


R.I.P. battery.

Personally I've been using Brev [1] for my cloud training; you get a cloud GPU instance that you can upgrade/downgrade on the fly, and it supports VS Code out of the box.

[1] https://brev.dev/

(I'm not affiliated with Brev)


Why would I train ML models on a laptop?


This product is seriously late?

Surely you want this before Christmas so more-money-than-sense parents can buy it to help their failing kids pass computer class?

"It's 100% cause everyone else has a better GPU mom. And the professor hates me. No I can't play Fortnite on a Mac I need it to unwind."


A 1-year warranty does not make you feel like they're confident in their build quality.


Do not buy one of these. It is functionally the same as the Razer gaming laptops, which are notorious for swelling batteries. I had one that almost blew up. Had to remove it entirely.


Yes agreed. By all means: DO NOT BUY RAZER.

My two friends and I own Razer laptops, and the batteries have swollen up in them. It's impossible to contact support too, and you won't be able to get it replaced even within the one-year warranty.



Thoughts on fan noise and thermal throttling? My M1 Max is cool to the touch and dead silent.


>Thoughts on fan noise and thermal throttling

I think your answer is in this Reddit comment[1], I quote:

"I have to step away when it is running ML models. Fan sound is loud"

There also appear to be various comments on Lambda's own forums on the subject of fans...[2][3]

[1] https://www.reddit.com/r/razer/comments/vhqjkq/comment/idila6y/?utm_source=share&utm_medium=web2x&context=3
[2] https://deeptalk.lambdalabs.com/t/tensorbook-fan-speed-too-loud-or-too-high-control-tensorbook-fan-speed/742
[3] https://deeptalk.lambdalabs.com/t/tensorbook-produces-loud-grinding-noise-from-the-fan-back-of-the-laptop/1620


Why is right shift almost as big as the space bar?


That's a right shift key, it's mostly useless these days.

But as it gets smaller, the risk of it becoming impacted and infected goes up, so it's really an evolutionary minimum. Oh wait, that's an appendix.

No idea.


Fun fact: the current theory is that the appendix isn't useless as previously thought, but actually functions as a safe house for beneficial bacteria to repopulate the gut after illness. If I remember correctly, they've done studies and found that people who have had their appendixes removed have weakened immune systems and take longer to recover than those who haven't.

https://www.sciencedaily.com/releases/2017/01/170109162333.h...


Makes sense! I've had mine out and I'd consider my immune system the bad end of normal.

Although, I suppose if my immune system was so good before the appendicitis I wouldn't have gotten it in the first place?



