

It's pretty trivial to set up a template and a data source, and have the mail merge feature generate your reports for you. If you press F4 in Writer to open the data source view, you can pick a data source and drag & drop fields or queries directly into the document where you want them. The rest is just trivial document editing, which you're more than likely already familiar with.

Your data source can be anything from a CSV (spreadsheet) or an .odb file to an external connection to a database (e.g. Postgres).

Agreed, and you could pair it with Ledger[0] as a way to keep track of the accounting.

0 - http://www.ledger-cli.org/

Another obvious example that can benefit from the setup is a crypto coprocessor, whereby private keys are side-loaded onto the coprocessor by means of a separate hardware bus, such that the key is never exposed to the CPU or main memory, and the coprocessor can handle all encryption/decryption with said keys.


An FPGA+CPU type setup is not cheap to develop or deploy.

Crypto on a modern CPU such as a Pentium works well. Also, almost all ARM SoCs already come with a crypto co-processor with private memory that's not shared with the main CPU.

Not too sure about the value add there.

The value is not about performance, but security. Crypto done by modern CPUs is susceptible to many side-channel attacks. Having a separate SoC is nice, but the SoC is not upgradable with new algorithms or patches if there are any problems found in its implementation.

Also, just having a separate memory space for the purpose of computation is not sufficient. I'm arguing for an entirely separate hardware bus to load keys onto the chip, such that they never exist in CPU, memory, or the storage for the machine being used for general purpose computation, because keys there can be obtained via side-channels if other software (such as the kernel) is exploited.

Kernel is ideal for creating small embedded languages, where you can selectively expose parts of the parent language to perform general purpose computation, without giving any access to sensitive code.

Kernel is like Scheme, except environments are first-class objects which you can mutate and pass around. A typical use for such an environment is as the second argument of $remote-eval, to limit the bindings available to the evaluator. If you treat an eDSL as a set of symbols representing its vocabulary and grammar, you can bind them into a new environment with $bindings->environment; passing a piece of code X to eval with that environment as the second argument ensures X can only access those bindings (and literals that result from parsing Kernel, such as numbers), and nothing else.

There's a function make-kernel-standard-environment for easy self-hosting.

Trivial examples:

    ($define! x 1)
    ($remote-eval (+ x 2) (make-kernel-standard-environment))
    > error: unbound symbol: x

    ($define! y 2)
    ($remote-eval (+ x y) (make-environment ($bindings->environment (x 1))
                                            (make-kernel-standard-environment)))
    > error: unbound symbol: y

    ($remote-eval (+ x y) ($bindings->environment (x 1) (y 2) (+ +)))
    > 3

    ($define! x 1)
    ($define! y 2)
    ($remote-eval (+ x y) (get-current-environment))
    > 3
$remote-eval is a helper operative in the standard library which evaluates its second operand in the dynamic environment to get the target environment, then evaluates the first operand in that. The purpose is to provide a blank static environment for o to be evaluated in, so that no bindings from the static environment where $remote-eval is called are made accessible to the first operand.

   ($define! $remote-eval ($vau (o e) d (eval o (eval e d))))
If you're familiar with Scheme, you may notice the absence of quote: operands are passed to an operative unevaluated, so there's nothing to quote.

And contrary to the opinions in the article, Kernel is the most expressively powerful language I know, and I'd recommend everyone learn it. You have a powerful but small and simple core language, from which you can derive your DSLs with as little or as much power as you want them to have.
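
For a slightly less trivial sketch (the names $calc and calculator-environment are invented here, but it only uses the standard $vau, eval and $bindings->environment shown above), here's an operative that evaluates its operand against an environment exposing nothing but a few arithmetic combiners:

    ;; An environment containing only the three combiners we choose to expose.
    ($define! calculator-environment
        ($bindings->environment (+ +) (- -) (* *)))

    ;; Operands reach an operative unevaluated, so the body decides where
    ;; they get evaluated: here, only in the restricted environment above.
    ($define! $calc
        ($vau (expr) #ignore
            (eval expr calculator-environment)))

    ($calc (* 3 (+ 1 2)))
    > 9

    ($calc (write "boom"))
    > error: unbound symbol: write

The untrusted expression can compute whatever it likes with the exposed combiners, but it has no way to name anything outside them.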


Thanks! I will check out Kernel.

The ability to restrict the environment is key. Creating a DSL in standard Lisp would create a more powerful environment: the DSL would include all Lisp features plus the domain features.


But how do you implement the small, concise languages without the gotos and side-effects to begin with? You're going to need a language with sufficient expressiveness, and the capability to perform side-effects, to do so. Implementing a new language entirely without side-effects is infeasible. Replacing GOTO with some structured variant is trivial, though.


> If Go is bad, why is it so popular? The cognitive dissonance is particularly difficult to contend with this time around.

> It was developed at the largest internet company on earth by some of the most accomplished programmers in history.

Paradox answered.

New languages are developed almost daily: as hobbies, by students, by academics, and so forth. Most of them you've never heard of and will never take the time to look at, because you don't know what gains the language can bring, and spending hours reading the research that went into it isn't worth it until you do. Particularly if the new language introduces something genuinely new, which requires you to actually learn something, rather than re-imagine something you already know in a different way.

But, if you see that someone famous wrote the paper, it might be worth a read, because they have a reputation, and the no-names who write the other languages every day do not have celebrity status yet.

And it only takes a few people curious enough to read it, break it down into small pieces, and publish a blog post on what benefits others might get from using it, to trigger wider adoption.

The argument "that everyone is stupid", is what many in the programming community are willing to do, because how many of the other 364 programming languages developed this year did all of the Go enthusiasts read about before jumping on the bandwagon?


The Google name surely gets a lot of people's attention. But if the language actually sucked, it wouldn't matter. This is a tool that you have to use EVERY DAY. The Google name is far from a guaranteed hit maker. See Google Plus, Google Wave, etc.

That's like saying Dale Earnhardt could recommend driving a Gremlin... sure, it would get a lot of people to go for a test drive, and a lot of those people would then run away.

I fully admit I clicked on the link to the Go website purely because it was from a big name... the exact same thing I did for Hack and Swift and Rust. But I don't write code in Hack or Swift or Rust. I write code in Go.

Sure, I may have missed some really great language put out by an obscure name that never made it to Hacker News. But then again, evidently so did everyone else.

So why do I write code in Go? Because the language and the ecosystem around it are really good. Because I feel like I can understand any piece of code written in it without needing a PhD from Stanford. Because I feel like it pushes people towards simple, straightforward solutions.

Past that first click, the only thing Google means to me is a guaranteed check to pay the bills of all the great people working on the project.


It's not that golang is bad, just that it's not good. Understanding the difference between those two is understanding why golang is popular, why Java is popular, why C is popular, why any number of other mediocre things are popular. Golang gets a lot of stuff right, but most of the stuff it gets right isn't to do with the intrinsic design of the language.

Golang is good only insomuch as C is good, because it's clearly inspired by C. That's why golang is popular, because it's very similar to something everyone already knows. The switching cost is low and the tooling is really nice. Some things are nicer than in other C-like languages, some things aren't. The bottom line is that language innovation these days is largely not happening in C-like languages, so most language innovations aren't popular. Languages like Swift subvert this rule, however Swift has its own (arguably more influential) corporate backing.

Go isn't bad, it's easy. Very easy. Easier than almost everything else. That's why it's popular.


The Go language is an excellent tool for shipping maintainable server-side applications.


Maintainable in the short-term.


What makes you say that?


Because it has a few fundamental quirks that make it a poor choice for enormous systems. It uses mutable data by default, the type system is not all that strong, and it has no facilities for making abstractions due to it being too simple. The developers of golang consciously throw out like 30 years of research in programming language theory because the idea of a language that gets "back to the basics" is appealing to a large group of (mostly good, experienced) programmers even though it's not a particularly good idea.

The first answer to this Quora question by one of the developers of D explains a few other points and also what golang is really good at:



All of the most popular languages have mutable data by default. 99.99% of all enormous systems were written with mutable by default languages.

The type system not being "strong" simply means that you put logic in functions and methods instead of magic parts of the type system.

And Go can absolutely make abstractions. The standard library is full of abstractions. Every Go package is an abstraction. Hell, every function is an abstraction.


> 99.99% of all enormous systems were written with mutable by default languages.

And are they easy to maintain?


Not so bad. And the one I'm working on in Go is definitely helped by being in a simpler language. I don't really think immutable data would be a huge boon to maintainability. Some sure, but I think it would also complicate some of the code.


So you think syntactical simplicity is more important to the maintainability of a large system than whether or not it uses immutable data? Ok.

Because immutability is a massive boon to horizontal scaling and maintainability in ways that are simply unattainable in languages where it's not encouraged. It makes concurrency and parallelism observably trivial, it makes your code safer. Assuming you're using a strictly evaluated (AKA eager, greedy, whatever else you know to mean not-lazy) language with immutable data, race conditions are eliminated.

Let's review that again. In a large distributed codebase written in an immutable language, (let's say Elixir, it's fashionable at the moment and good for doing this kind of stuff) concurrency and parallelism are trivial and race conditions are nonexistent. This is even before we get to type safety and stuff like the actor model. That's huge. That's the immutability advantage. I could go on but it's late and I have school tomorrow.


I very much dislike that invalid states are left representable in idiomatic go error handling.

I trust Go's authors more than others because I trust their experience. They have solved hard problems and I believe have wisdom about how to develop software well. There are certainly people who have done that as well but aren't famous. I hope I will discover their work so I can learn from it. But since Go's marketing is much better, I will use it and benefit from the learned experience of its creators.


Indeed. Go is very polished. The tooling is excellent. This stuff matters a lot.


> Paradox answered.

I've heard Go being advocated twice for development teams, and both times the "Google" part was at the top. Second came "binary deployment and static compilation" one time, and "faster than Python, plus concurrency" the other. So yeah, I believe it.

Even people who could not articulate why Go would be appropriate for a project/task were jumping on the bandwagon because "Google does it so it must be good". Which, well, is not a terrible heuristic for some.


Anything the user decides to put in the refrigerator, provided we put warnings in small text in the document that accompanies the refrigerator for categories of items we know to be unfit for it, so as to cover our asses if the user does not heed our warnings.

In terms of software, trying to decide up front for the user what they can do is the wrong approach. Even constraining them is flawed, with the exception of constraints for liability issues. What the user needs is the means to express their decision-making as to whether something should go into the fridge, so they need an input mechanism capable of making that selection, with access to the context in which the item is being placed into the fridge.

Does this sound like any configuration format you've ever used? The typical key-value stores, which give you a declarative list of what you can put where without any means to compute whether it might be a good idea first, are almost ubiquitous; the programs which take the right approach, giving the user a proper, expressive programming language to configure the software, are few.
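
To sketch what that looks like in Kernel (the names config-environment and read-worker-count are invented for the example, and it assumes the report's read and port features are available), the program evaluates the user's config file against an environment containing only the vocabulary and context it chooses to expose:

    ;; Context and vocabulary the program exposes to the configuration.
    ($define! config-environment
        ($bindings->environment ($if $if) (>? >?)
                                (cpu-count 16)
                                (hostname "db1.example.org")))

    ;; A config file is ordinary code, but it can only see the bindings above.
    ($define! read-worker-count
        ($lambda (config-port)
            (eval (read config-port) config-environment)))

    ;; A config file might then contain: ($if (>? cpu-count 8) cpu-count 8)

The config can make decisions based on the context it's given, but it has no way to reach anything else in the program.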

Just as it happens, we've developed ACME Refrigerator v2.0, which now allows you to store chocolate bars in it; a feature we missed in v1.0. Upgrade now for just $400.

The absurdity of this idea with a fridge seems lost in software, because we can upgrade at almost zero cost in just a few minutes, but that's only masking what is an obvious design flaw.

Which raises the question: What can you put in a software package definition?


Had a similar experience. Had an allergic reaction to penicillin on the first prescription, which caused my whole body to come out in itchy hives. Went through 3 courses of different antibiotics until I was fit enough to get out of bed. Had bad guts for a couple of years following, usually an alternation between diarrhea and constipation. Sometimes I'd sit on the throne several times a day with no movement, except mucus or, eventually, blood, as I'd developed piles.

I went on a super high fibre diet, ate as much low calorie food as I could a day to keep things moving. Ate lots of fermented foods, mainly dairy, sauerkraut, cured meats. Eliminated sugar, took up cycling to work to get fit. I've not been ill since, my bowels are moving, and piles are gone. Not the most pleasant story, but thought I'd share it.


Thanks for sharing your story @sparkie. Doesn't sound like a whole lot of fun, what you went through.

Do you know of any high-fiber, low-calorie, non-meat, non-dairy foods? Also, does the percentage of soluble vs insoluble fiber matter, or not?


Various beans, sweet potatoes, yams, most root vegetables. Sweet potatoes became my favourite food when I realised you could use them as a sugar replacement for many kinds of baking, which made giving up the sugar so much easier.

As for the % of soluble and insoluble, I've never really paid attention to it. I probably eat far more soluble fiber than insoluble though, if that helps.


Not proven, but

Eat fermented foods, lots of fiber, more veg, occasional fatty meats. Eliminate sugar, reduce carbs, exercise.

Edit: I should probably add: Slowly introduce the fermented foods. If you're not used to them, might cause a few visits to the toilet if you have too much.


It's interesting, when you look at the temperature record, that the data sources are NASA/GISS, but the record goes back way further than these organisations have existed. So we ask: where does the data come from?

The bulk of the earlier record is from standard weather stations enclosed in Stevenson screens, which are dotted around the globe in cities and remote areas. The number of these has varied over the timeframe shown here, as some have been abandoned over the years and the nations responsible for them have changed.

A first, obvious question one might ask is: where can I get the raw data for all of these weather stations since record-keeping began? One might suspect there's a freely available corpus for all climate scientists to look at.

You'd be completely wrong, though: there is no such corpus. The data from these stations is aggregated by national weather organizations, who publish only an "average" temperature, and most of the raw data archive is lost. What data you can actually acquire (perhaps more if you have 'credentials') is rather limited, and vague as to what it represents.

This is the main reason I take climate science with a pinch of salt. All of the scaremongering depends on this so-called temperature record dating back to 1880, but they're comparing apples and oranges. In any science experiment, you take the variable you wish to measure and attempt to keep everything else constant. In climate science, the means of measuring surface temperature is itself allowed to vary; it has changed significantly since 1880. The recent record uses satellite imagery in addition to the surface measurements, for example. Still, climate scientists treat the data as though it came from a fixed number of weather stations in fixed locations, where any human activity around them can be considered to have no effect. That reality doesn't exist.

What is really behind NASA's data is "an adjusted view of surface temperature from 1880 to 2014", where the adjustments are made based on hundreds of theories and climate models proposed by many other climate scientists (albeit in peer-reviewed works). I want to believe that the state of AGW is as dire as it is made out to be, but until I have the raw data within reach for any of these publications, and we can start comparing oranges to oranges, I remain sceptical.


I'm sceptical that you've ever actually tried to get any data. The first sentence of NASA's GISTEMP page tells you where they get their data, and a few more clicks will get you to daily logs for a decent chunk of it.


Sparkie also doesn't understand that raw data, especially over long timescales where collecting methods have changed, is messy stuff. It has to be cleaned up to be useful, and the people who are best qualified to do that are, gosh, climate scientists. Temperature dataset papers are routinely published and any odd assumptions challenged by other people knowledgeable in the field.

In short, demanding "raw data" is a shining example of the Dunning-Kruger effect.


I disagree extensively - raw data should definitely be available, and there are a number of interesting things you can do with it without any special expertise. For one, evaluating whether those adjustments even affect any overall conclusions you are concerned about, like this:


If there is incompetence at play here, it's in failing to get ahold of raw data.


It's funny how the reaction of most people in this thread is to bash on 'the other sciences' for bad coding practice, while completely ignoring how inadequate their own practices are in creating reproducible programs.

The heart of the problem is described in the article: it's point-and-click interfaces (and yes, this includes typing commands into a terminal), which are expected to be followed to the dot, where they could be automated by the machine if the program were ever completed to the point of being reproducible.

A big problem is that a non-computer-scientist is probably working in a lab on some ancient machine running ancient software. He has no control over the machine, and the sysadmin is so far behind because his main job is to reproduce the (unreproducible) software written by so-called 'computer scientists'. He struggles so much that he has to share his work with thousands of others as a 'package maintainer', and is grateful that so many other package maintainers exist, because without each other they would all have absolutely zero chance of reproducing anything.

It's time to stop bashing the coding practices of other people, folks, and look in the mirror. We are the friction that causes code in scientific research to be unreproducible; it's not the code itself. If we're to educate non-computer-scientists in how to create reproducible research, surely the absolute minimum is that we do so ourselves.

And so far, Nix and Guix are the only two projects (afaik) which are seriously attempting to tackle this. If you call yourself a 'computer scientist', and you regularly write research (which all code is), then start living up to the name and make it reproducible. That means using Nix or Guix and packaging your software for it. Without such a tool to reproduce the software, you're suffering from the same reproducibility problems this article highlights about the other sciences.




