
This is great news for those invested in ARM CPUs. I, for one, purchased a Lenovo X13s when it first came out for around 1,900 EUR. Unfortunately, my experience with it on Windows was subpar, and it was even worse on Linux. At the same time, I bought my wife a MacBook Air (not sure if it was an M1 or M2), and it was snappy and very pleasant to use. I bought the Lenovo based on the hype from benchmarks, and it seemed like a capable system initially. However, I couldn't tolerate all its limitations and ended up returning it, as it performed no better than a Chromebook I had bought earlier.

I'm hopeful that Snapdragon will offer an alternative ARM platform for laptops that can handle more than just browsing. As consumers, we need options, and the more, the merrier. I'm still undecided about the short-term success of Snapdragon. For now, I'm betting/waiting on the MacBook Air with an M4 as my daily driver, although I do prefer the Lenovo ThinkPad format.


Wow, I thought the article was recent and that there was a second release.

I’d like to congratulate the author. This is not only a great source of technical depth in the realm of languages, but also a great read; the layout and the small details in the graphics kept me engaged the whole way through. It’s one of those books that feels like it will be relevant for many, many years to come!

Congrats and thank you!


I highly recommend using Forth for interactive exploration of hardware. In my opinion, C-Forth and uEForth are particularly well-suited for the ESP32. Even when I choose to develop C-based bare-metal solutions for the ESP32, Forth proves incredibly useful for quickly testing and validating my ideas. This is especially true for verifying physical wiring, as it significantly reduces the time spent troubleshooting potential errors, whether in my own work or when examining a pre-made board.


why not micropython?


I don’t know micropython well enough.

I know that it is substantially supported on the ESP32, which is great, but I’m not sure whether it gives you the HW access that Forth has; you can extend and build your own drivers on the Forth system, which is my main use case. I like to do things as bare to the metal as possible, and C and Forth give me that; it’s the same reason I never used the Arduino framework.
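
For what it’s worth, here is a rough MicroPython sketch of the kind of low-level poking I mean (machine.Pin and machine.mem32 are part of the ESP32 port; the register address is an assumption to double-check against the TRM). Whether this reaches the same level of driver-building extensibility as a Forth system is exactly the part I’m unsure about:

    # MicroPython on the ESP32: a quick wiring sanity check from the REPL.
    # machine.Pin covers normal GPIO; machine.mem32 gives raw register access.
    # The GPIO_OUT_REG address below is for the original ESP32 and should be
    # verified against the chip's technical reference manual before trusting it.
    from machine import Pin, mem32
    import time

    led = Pin(2, Pin.OUT)            # many dev boards have an LED on GPIO2
    for _ in range(5):
        led.value(not led.value())   # blink to confirm the pin/wiring is alive
        time.sleep(0.2)

    GPIO_OUT_REG = 0x3FF44004        # assumed address; check the TRM
    print(hex(mem32[GPIO_OUT_REG]))  # read the raw GPIO output register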


What a fantastic site! I wish more sites were built this way; it was an awesome experience! I’m an outsider who has lived in Spain for the last 22 years, and I think that while the content portrays a reality of the high-rise movement in Spain, there is an angle that was missed. It is important to say that the at-times draconian land laws play an important role in the urban landscape. Not to mention that Spain’s relatively young democracy (we won’t talk about that today) also plays an important role in shaping the urban scene.

Frequent changes in the governing parties have also led to constraints and restraints during several periods. All this has meant that different areas would grow artificially, dependent on the “favoritism” at play at the moment, through favorable zoning changes or subsidized public housing. Thankfully, Spain has not yet discovered property taxes like the US has; growth happens with little planning and is driven by massive speculation, which has been an economic driver for major cities even with relatively low occupancy rates. There is also the fundamental culture of buying vs renting that drives more flats to be built to keep up with demand.

On the plus side, one of the biggest differences I have seen in Spain is a relatively wide economic demographic spread within its cities, where you don’t see the major shifts you see in most other major cities across the world; it happens, but it’s neither as common nor as drastic as elsewhere.


> Spain has not yet discovered property taxes like the US has

I wonder what you mean by this, since homeowners in Spain do pay a property tax with similar rates to the US's (around 1% of assessed value).


Anyone know more information about how this was built?


Unpopular opinion but I hate the design of the site. The transitions are too slow and the entire site is inaccessible to anyone with vertigo, visual processing or balance disorders. Needless to say it also doesn't respect the browser's `prefers-reduced-motion` preference.

A massive case of style over substance for me.


I agree and I don't have vertigo or any other disorder. This is just bad UX.

The charts and graphs are also not very good. You can't search, sort, or filter in any way.

There’s also no light mode, which is terrible UX. People with astigmatism can have a hard time with dark mode.

https://medium.com/@h_locke/why-dark-mode-causes-more-access...


Agreed.

Another downside is that it consumes huge amounts of memory.


my philosophy: it’s ok to have fun on the internet


So what? Memory exists to be used. This isn't Slack, you don't have it open in perpetuity.


If we built everything to tiptoe around every 0.01% disorder out there, we wouldn't have anything nice.


8% of people in the US have some visual impairment; 18% of those over 65.

https://hpi.georgetown.edu/visual/


Well, nevertheless, these deals are all done with a set of vested interests that on many occasions don’t necessarily align with either the customers’ interests (the reason companies should even exist) or those of their employees (the motor of a company’s success), but live more in the realm of financial spreadsheets, where due diligence sways in the direction of the stakeholders in the deal… And I purposely look at it this way so as not to take any given side without the details.

Nonetheless, IMO, the only winners here are the stockholders. Splunk as a business, and many others with the same “schema selling” model, are at high risk in the new era of AI/LLMs.

If you consider the accelerating world of AI we are living in, and the emergence of and trend towards Domain Specific Large Language Models (DS-LLMs) and advancements like MemGPT, they represent a transformative approach to data analytics. Instead of using a schema-specific model, as seen in tools like Splunk which extract and transform data into a predefined schema, DS-LLMs offer a flexible, continuously trained approach. They not only analyze data but also learn from it in real time. The “actors”, or bots, leveraging tech like MemGPT, which not only collect but also learn from the vast streams of data, are far more capable than those schema models. As these models self-train and trade knowledge, they are poised to provide insights more organically aligned with the data’s inherent structure, rather than a predefined schema. This means businesses could potentially gain deeper, more intuitive insights without the confines of structured data models. With the rapid pace of innovation in the AI sector, it’s worth questioning whether traditional, schema-based solutions will be able to keep up with the dynamic learning capabilities of DS-LLMs. I still wonder who got the better deal here.

Wishing the best to all the Splunk employees moving forward.


Is this a coincidence with the NVIDIA announcement that they will be focusing on ARM chips for the PC? An acquisition? No source, just pointing out the correlation in the timing. I’m not suggesting NVIDIA is buying them; I’m saying that someone could be considering a buy to counter the NVIDIA move.


Hey Torrent, can you provide a reliable source for that info? Recently, I’ve noticed a surge in Domain Specific VMs even in userland for Linux (like Google Falcon). This has led me to wonder if a boot loader combined with a bytecode VM could optimize performance in not only games but also various applications. I tried checking an Xbox game binary for traces of a VM. Given the potential for encryption obfuscation, I assumed the binaries might be encrypted and didn’t dig deeper. I’m not familiar with the gaming or pirating scenes, so this is all novel to me. But I’m keen on exploring this idea further. What can HyperV bring to the table in this context?


For me, one of the significant criticisms of cryptocurrencies, particularly Bitcoin, is their environmental impact. The process of “mining” Bitcoin requires significant computational power, leading to a large carbon footprint. Recent estimates suggest that the power consumption of the global Bitcoin network is comparable to the energy consumption of entire countries, like Finland or Norway. This has led to concerns about the sustainability of cryptocurrencies in their current form. Even with Proof of Stake, this is still a very high price to pay for a system that is slow and mostly used as a speculative asset. I can’t seem to find the “real” value of crypto, other than the get-rich-quick schemes we see, and how they end…


Its energy usage is very good because it can provide people who want to invest in renewables a source of revenue in the market where energy prices already go negative at times of peak production.


This is a fascinating approach. I’m working on something similar, but as part of the feedback loop, as you said, rewriting history with transactional data as part of the context window. I feel as though the LLM and NLP could potentially be a more realizable interface to structured data; well, I should say, this is the idea we are exploring. For us, as data is created (within a certain business context), we extract the data, generate the embeddings, and build out the vector database so as to do:

Pre and Post-Processing:

- Post-Processing: After the main model responds, a post-processor takes over, automatically generating memories from the conversation and saving them. This ensures that important context is stored without burdening the primary model with these tasks. We also execute any relevant business logic as part of the request, then feed that back to the systems…

- Pre-Processing: Before a new input is sent to the main model, a pre-processor checks saved memories and injects relevant context (and executes logic). It’s as if this pre-processor gives the main model a “refresher” on prior conversations, preparing it to provide more informed and consistent responses.
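
As a rough sketch of how those two steps wrap around the main model (all names here are hypothetical placeholders: embed(), MemoryStore, call_main_model()):

    # Rough sketch of the pre/post-processing loop described above. Everything
    # here is hypothetical scaffolding meant only to show where memory retrieval
    # and memory writing sit around the main model.
    from dataclasses import dataclass, field

    @dataclass
    class MemoryStore:
        items: list = field(default_factory=list)  # (embedding, text) pairs

        def add(self, embedding, text):
            self.items.append((embedding, text))

        def search(self, query_embedding, k=3):
            # naive cosine similarity over everything stored so far
            def cos(a, b):
                dot = sum(x * y for x, y in zip(a, b))
                na = sum(x * x for x in a) ** 0.5
                nb = sum(y * y for y in b) ** 0.5
                return dot / (na * nb + 1e-9)
            ranked = sorted(self.items, key=lambda it: cos(query_embedding, it[0]), reverse=True)
            return [text for _, text in ranked[:k]]

    def embed(text):
        # placeholder embedding; swap in a real embedding model here
        vec = [0.0] * 16
        for i, ch in enumerate(text):
            vec[i % 16] += ord(ch)
        return vec

    def call_main_model(prompt):
        # placeholder for the actual LLM call
        return f"(model response to: {prompt[-60:]})"

    store = MemoryStore()

    def handle_turn(user_input):
        # Pre-processing: look up saved memories and inject them as context
        memories = store.search(embed(user_input))
        prompt = "Known context:\n" + "\n".join(memories) + "\n\nUser: " + user_input
        response = call_main_model(prompt)
        # Post-processing: distill the exchange into a memory and save it
        memory = f"user said: {user_input} | assistant said: {response}"
        store.add(embed(memory), memory)
        return response

    print(handle_turn("Customer 1042 asked about their invoice"))
    print(handle_turn("What did customer 1042 ask about?"))

In the real system, embed() and call_main_model() would be the actual embedding model and LLM call, and MemoryStore would be the vector database.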


Great site! I kind of have a predisposition to summarize Linux performance, be it tuning or monitoring, so, taking a deep breath…

This is such a deep subject, with a long list of varied observability tools. At minimum, make sure you know uptime, dmesg, and iostat deeply. These are your friends for a glimpse into various system aspects like load, memory, CPU, and more, enabling a diagnostic overview of system health. This is what I call the “let me take a look at it” checklist, 1st page of 100!
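
As a rough sketch, here is that first-look checklist wrapped in a small Python script (assuming a Linux host; iostat comes from the sysstat package):

    #!/usr/bin/env python3
    # First-pass "let me take a look at it" checks: load, recent kernel messages,
    # and disk/CPU utilization. Assumes a Linux host; iostat comes from sysstat.
    import subprocess

    CHECKS = [
        ["uptime"],                                # load averages, time since boot
        ["dmesg", "--level=err,warn", "--ctime"],  # recent kernel errors/warnings
        ["iostat", "-xz", "1", "3"],               # device utilization, 3 samples
    ]

    for cmd in CHECKS:
        print(f"\n### {' '.join(cmd)}")
        try:
            out = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
            print(out.stdout or out.stderr)
        except FileNotFoundError:
            print(f"{cmd[0]} is not installed on this host")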

When it comes to methodologies for performance analysis, I recommend careful benchmarking to holistically evaluate system behavior and workload characteristics, with before-and-after scenarios. Make smaller changes first, then gradually compound what you think will provide benefits. Remember, labs and production never behave the same.

This is where it gets tricky: CPU profiling with tools like “perf” and visual aids like flame graphs enables targeted analysis of CPU activity, along with tracking hardware events to optimize computational efficiency. You need to know more than “it’s the app, man, it was fine until the latest release from development”.
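
As a hedged sketch of the usual perf-plus-flame-graph loop, driven from Python here only for illustration (it assumes perf is installed and that the FlameGraph scripts, stackcollapse-perf.pl and flamegraph.pl, are on the PATH):

    #!/usr/bin/env python3
    # Sketch of a system-wide CPU flame graph: sample all CPUs at 99 Hz for 30 s,
    # then fold the stacks and render an SVG. Assumes perf is installed and the
    # FlameGraph scripts (stackcollapse-perf.pl / flamegraph.pl) are on the PATH.
    import subprocess

    subprocess.run(["perf", "record", "-F", "99", "-a", "-g", "--", "sleep", "30"], check=True)

    with open("out.perf", "w") as f:
        subprocess.run(["perf", "script"], stdout=f, check=True)
    with open("out.folded", "w") as f:
        subprocess.run(["stackcollapse-perf.pl", "out.perf"], stdout=f, check=True)
    with open("flame.svg", "w") as f:
        subprocess.run(["flamegraph.pl", "out.folded"], stdout=f, check=True)

    print("open flame.svg in a browser and look for the widest towers")

The widest frames in the resulting SVG are where the sampled CPU time is actually going.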

When you are the admin speaking to a developer, Linux tools like ftrace and BPF come into play, allowing detailed tracking of kernel function execution and system calls, which can be vital in troubleshooting and performance optimization. You can also be the developer, verifying the admin’s intuition… as the saying goes, trust but verify.

When it’s your code, then you better know BPF! It not only facilitates efficient in-kernel tracing but also propels the development of advanced custom profiling tools through bcc and bpftrace, offering deeper insights into system performance.
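
For a taste of the bcc side, here is a minimal Python sketch that attaches a kprobe to execve and prints each exec; it assumes the bcc Python bindings and kernel headers are installed and that it runs as root:

    #!/usr/bin/env python3
    # Minimal bcc example: attach a kprobe to execve and print each exec'd command.
    # Assumes the bcc Python bindings and kernel headers are installed; run as root.
    from bcc import BPF

    prog = r"""
    #include <uapi/linux/ptrace.h>

    int trace_exec(struct pt_regs *ctx) {
        char comm[16];
        bpf_get_current_comm(&comm, sizeof(comm));
        bpf_trace_printk("exec by %s\n", comm);
        return 0;
    }
    """

    b = BPF(text=prog)
    b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")

    print("Tracing execve()... Ctrl-C to stop")
    b.trace_print()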

Last comment, it’s %$$% hard! Tuning means you need to navigate through adjusting a myriad of system components and kernel parameters, from CPUs and memory to network settings, aiming to optimize performance and reliability across various system workloads, else you can blame it on the network! :D

Really, you need a good, disciplined attitude toward change management, as chasing code or kernel parameters can be a daunting task that overwhelms everyone at a moment when you might be time-constrained, and the pressure can lead to a higher degree of human error.

