Hacker News
Add opt-in transparent telemetry to Go toolchain (github.com/golang)
110 points by gus_leonel on May 17, 2023 | 119 comments



Very relieved that they chose to back off the initial opt-out proposal. It’s always refreshing to see a language listening to its user base. This was the kind of decision that could instantly change the reputation of a PL, for purely political / psychological reasons.


It's a classic case of the Overton Window shifting.

They get users accustomed to the idea that now telemetry is in the toolchain. Next step will be to "accidentally" turn it on for everyone.

Then it will be "oopsie daisy, everyone okay? See, nothing happened, so we'll leave it opt-out."


This implies some sort of malicious intent on the part of the Go maintainers, which doesn't really seem fair given the way the original proposal was written and how feedback was incorporated.


> This implies some sort of malicious intent on the part of the Go maintainers

It doesn't really, not necessarily anyway.

Think of all the awful terrible laws that get introduced "for the children". While in politics there are certainly a lot more malicious actors than in development, there are certainly many people who truly believe they are doing good "for the children" and they simply don't stop and think what are the drawbacks of their proposals.

I think it's just human nature to look for solutions to improve things which they are involved in (which is great) without (often) giving enough weight to how it harms other people (which is not good). That's why it's important to push back against these things, whether in software or in law.


I very much doubt that everyone would be okay with them "accidentally" turning telemetry on by default while continuing to say that it's opt-in. In fact, it would be very damaging for the Go project's reputation, especially since everyone already associates Google with "spying on users"...


The vast majority of folks downvoted the initial proposal and they're still moving forward with it (albeit as opt-in now), so I don't think everyone's opinion really matters here. Go isn't a democracy, for better or for worse.


Opt-in is a huge concession, especially since it means they will lose a lot of the information they would have gotten had they kept it opt-out. It makes built-in telemetry more palatable, because it gives the users a choice.

Contrast that with how Homebrew implemented their opt-out telemetry: opinions obviously did matter to the Go team. Yes, I'm still salty about that project being so tone deaf.


There is an uneven distribution of power here that allows someone to write a blog post, create a very poorly received proposal, and have it be accepted after a concession, all over the course of a few months. Compare that to some of the other proposals with a ton of backing that have been frozen for years (like https://github.com/golang/go/issues/49085) due to the core team disagreeing with the direction.

Basically, there's a very obvious in group that has to pay lip service to the proletariat but otherwise can do whatever they please.


That's fairly true of many (possibly most?) programming languages with more than 100 users.

You can always fork an open-source implementation, but that's a lot less useful when it means your code is no longer compiler-compatible with other people's code. So languages tend to be run by some committee, and a lot of the popular ones have a Benevolent Dictator for Life or a small committee.

Guido was Benevolent Dictator for Life of Python until 2018. C++ is standardized by JTC1/SC22/WG21 (and Microsoft still has huge influence on it based on simply whether they decide or not to incorporate a feature into MSVC). Ruby is an ISO standard. Common LISP is an ANSI standard. Modifying a widely-adopted language in a way that will be seen by most users is at least as much "Can you work with those with the political influence to decide 'yes'" as "Is your recommended modification technically good?"


That sounds like our old nemesis: politics. Who holds control and how they exert that control is a very human issue that exists for every project in existence (modulo an outlier or two).

Telemetry is valuable for making decisions and identifying issues, so it's no surprise the group making decisions about the future of the project would welcome the proposal.


I think it is very charitable to believe people on Google's payroll use spyware to "identify issues".


I read that thread, and TBH it looks like the issue was indeed rejected for pragmatic reasons.

Coming from a language that is slowly but surely trending toward an "everything and the kitchen sink" PL (aka Swift), believe me, I appreciate A LOT the care with which the Go team makes sure any new feature composes well with the existing state of the language and doesn't add too much complexity.


Your response perfectly illustrates how the Overton Window works in practice. Remarkable.


There is a sizeable group of people - of whom I am one - who accept that software telemetry is incredibly valuable for making decisions about the future of a project. They simply want the use of it to be their choice.

Continuing to hold this opinion does not imply that their opinions have somehow been manipulated or changed.

As for the Overton Window - telemetry has been a part of software since the internet was a thing. And it will continue to be a thing long into the future. I just fight to keep it as a choice for the user.


> As for the Overton Window - telemetry has been a part of software since the internet was a thing.

Not true. As I mentioned in a separate comment, in the 90s (and even somewhat into the early 00s) adding any kind of phone-home functionality to code was pretty much universally seen as an outrageous privacy violation.

Go back to, say, year 2000 and post on a technical newsgroup about having noticed some code phoning home and watch the outrage fly. This was seen as a line that must absolutely never be crossed.

So yes, the Overton window has done a massive shift.


> in the 90s (and even somewhat into the early part of 00s) adding any kind of phone-home functionality to code was pretty much universally seen as outrageous privacy violation.

Yep.

And I remember a few very heated fights between dev teams and marketing teams over it. Devs universally fought adding telemetry because it's an abuse of users and user trust. Marketing universally fought for adding it because it allowed them to sell more efficiently.


Hate to break it to you but it wasn't ever considered outrageous by anyone except a tiny minority of extremely loud geeks who knew how to use the right forums and right rhetoric to have disproportionate impact on other fellow geeks.

The 90s-early 2000s were the era where the internet exploded and the dev community decided en-masse to start shipping software as web apps. Web apps are notable for giving massive streams of 'telemetry' to the web server operators as a natural consequence of how they work. Nobody cared and the change was embraced because the value of those server logs was so high that web-based SaaS companies could easily outcompete people steeped in client-side "your mouse clicks are private" culture.

Eventually, web culture had to be forced on the desktop people. At Google they were distributing desktop apps and the command came from the very top (Larry Page) that Google's software updates should work just like the web. Silent, background, no end user control, no confirmation popups. Stuff just updates. Of course, huge uproar. End users should be in control of their computer etc. Nothing worked like that at the time. Page insisted: do it or you're fired, so it got done. Users loved it, that model turned out to be extremely successful and became a key competitive advantage for Chrome when it launched, one that others have since copied.

The Overton Window shifted because the window only existed at all in programming culture. It took confident and in-control CEOs like Page and Jobs to kick developers out of their ideological cubby-hole and into a place that better suited the huge majority of end users, who really couldn't care less if some guy they never met knows how many seconds they spent looking at the welcome screen. But they do care a lot about whether their tech actually works reliably.


By accepting telemetry as a standard practice, it won't be a choice soon.


It is as long as I have a firewall.


"They changed the proposal based on feedback but no one's opinion mattered"

Ehhh...


> The vast majority of folks downvoted the initial proposal

And the vast majority upvoted this proposal.


This will be the case. They're just moving the goal post to a safe distance for now.


This is a very unfair and unfounded criticism of the Go maintainers.


> This was the kind of decision that could instantly change the reputation of a PL, for purely political / psychological reasons.

Go already has a bad reputation for political reasons.


Care to give more details? I'm not sure what you're thinking of.


I think adding telemetry to a compiler goes in the opposite direction of the current "minimal privileges, sandbox all the things" trend.

For instance, it would make sense to create a set of selinux rules (or something like it) to make sure that a compiler cannot do anything other than reading its input files (and system headers/libraries/etc) and writing to its output directory, even if for instance a buffer overflow triggered by a malicious source code file led to running shell code within the compiler. Having to allow access to the network for the telemetry would require weakening these rules.
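As a concrete sketch of that kind of confinement (a minimal example assuming bubblewrap is installed; the paths, bind mounts, and compiler invocation are illustrative and will vary by system), a network-less compile could look like:

```shell
# Run a compiler in a throwaway sandbox with no network namespace.
# --unshare-net is the key flag: even if the compiler is compromised,
# it cannot phone home. Paths below are illustrative.
bwrap \
  --ro-bind /usr /usr \
  --ro-bind /lib /lib \
  --ro-bind /lib64 /lib64 \
  --ro-bind "$PWD" /src \
  --bind "$PWD/out" /out \
  --dev /dev --proc /proc \
  --unshare-net \
  --chdir /src \
  cc -O2 -o /out/hello hello.c
```

SELinux or AppArmor policies can express the same constraint declaratively; the sandbox approach has the advantage of not requiring changes to system-wide policy.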

It reminds me of the classic "confused deputy" article (https://css.csail.mit.edu/6.858/2015/readings/confused-deput...), which coincidentally also involved a compiler tracking statistics about its usage.


That would actually already be difficult with the current go tool, since it's more than "just" a compiler but also fetches dependencies. If all dependencies are already in place it won't hit the network, so there are options, but you'd have to find another way to retrieve those.

The telemetry is opt-in, and failing to upload the reports won't fail the compile (the upload won't even run on every compile). It's not really preventing you from applying your SELinux policy if you want, even if it had been opt-out.


> to make sure that a compiler cannot do anything other than reading its input files (and system headers/libraries/etc)

Define "input files." Tools have to do a combination of reading, parsing, and sometimes even version unification/downloads just to get the complete set of inputs to feed to the compiler.

Of course you can define the compiler as the tool that parses text and writes machine code, but then you're just shoveling dirty water around.


Can someone explain how we managed to have programming languages and toolchain development for half a century without using telemetry, but somehow we need it today in our tools? Their "Why Telemetry?" blog post just doesn't cut it for me [1].

[1] https://research.swtch.com/telemetry-intro


Humans managed to live for most of history without penicillin, or even boiling water, at the cost of most humans dying before making it to adolescence. People managed to have global communications with only steam ships and telegraph, at the cost of slower pace of information dissemination. NASA managed to make it to the moon with less computing power than the cellphone in your pocket, at high resource, monetary, human and time costs. Cars managed to work with more rudimentary design than today's, without any computers, at the cost of lower life-spans, lower efficiency and higher pollution.

You can make many arguments for and against telemetry in developer tools. Not acknowledging that telemetry helps with visibility into how those tools actually work in the wild, which in turn helps lower the incidence of bugs and speed up development, is disingenuous. You can arrive at the conclusion that even inert, opt-in telemetry is not worth it, but don't disregard out of hand its utility in helping their development as if it were some crazy idea.


And yet when NASA went to the moon, they had telemetry data.


> And yet when NASA went to the moon, they had telemetry data.

NASA couldn't obtain that same data locally; they had to know how the vehicle and software behaved in the real situation. NASA's telemetry also ran on their own hardware, not on arbitrary users' machines.

Contrast that to Go, which is used in real situations internally at Google. They don't need to spy on their users, they have first hand experience using the tool.


> Contrast that to Go, which is used in real situations internally at Google. They don't need to spy on their users, they have first hand experience using the tool.

And that's a big disadvantage. If I were to break down Go's usage, I would say startups and mid-size orgs occupy something like 80% of it. If the Go team worked according to the logic of Google-scale use alone, Go wouldn't be this successful.


One of the top complaints about Go is that it over-indexes on the quirky way Google does things, so collecting telemetry only from themselves would seem to have the same problem. It is reasonable to want an unbiased sample of the userbase.


You can say "we managed to do X without Y" for a lot of values of X and Y.

I think that's the wrong way to go about things; instead it's more useful to ask "will this be useful?"

There's a long list of real-world use cases in part 3 of that blog series.

I miss telemetry in my app sometimes too; there are some features where I wonder if anyone actually uses them, and I also don't really know what kind of things people run in to. Simply "asking people" is tricky, as I don't really have a way to contact everyone who cloned my git repo, and in general most people tend to be conservative in reporting feedback. I have found this a problem even in a company setting with internal software: people would tell me issues they'd been frustrated at for months over beers in the pub, when this was sometimes just a simple 5 minute tweak that I would be happy to make.

Can I make my app without telemetry? Obviously, yes. And I have no plans to ever add it. But that doesn't mean it's not useful.


> instead it's more useful to ask "will this be useful?"

Well, that's also the wrong way to look at it. Because everything, no matter how broadly bad it might be, is useful to someone somewhere.

Of course telemetry can be useful to the developer of the application (if they look at the data and act on it). But at the same time it violates the privacy of all its users, who vastly outnumber (at least for most projects) its developers.

For any argument we need to look at pros & cons, not just the pros.


I think that should go without saying, but yes, obviously you're correct that "useful" involves arguments about advantages and disadvantage. The previous poster wasn't talking about trade-offs though.


The vast, vast majority of people simply do not care about this type of privacy (from absolute strangers with no details about their personal lives). It's a nerd canard that this type of data sharing matters.

Android, iOS, Windows, macOS, Chrome, games consoles, Docker, VS Code, IntelliJ etc. They all collect telemetry and stats on how they are used. That didn't stop them becoming monster success stories even amongst the developer population because nobody cares.

The Go team are right to do this. Really, we should all be following their lead. Software telemetry is essentially pure win with no downsides for end users, which is why everyone has adopted it. It can also be done in better ways than we do now, like by recording stats in human readable form in files that sit around for a while before they get uploaded, so uploads can be turned to manual mode for inspection by the 0.1% of people who do seriously care.


> I think that's the wrong way to go about things; instead it's more useful to ask "will this be useful?"

I think that's the wrong question as well. The right question is "does this provide benefits in excess of the costs"?


Every new feature starts at -100 points and must have at least 100 points to be added to the list of features.

You should only be adding telemetry if you can verifiably prove it will give you information you can't get otherwise, that you will actually use that info (which features hit the most bugs doesn't freaking matter if you are spending 95% of every sprint doing completely different things), and only if you can find a way to legally guarantee that info is used for NOTHING else.

Microsoft did not have broad telemetry in Windows in the 90s, and yet Raymond Chen had no trouble getting popular software and running it to find out what problems it ran into. When Vista had basic telemetry, all they found out was that nVidia makes crash-prone drivers (50% of all Vista BSODs), but that didn't help them at all. People still blamed Vista for all the problems not caused by Vista, and nVidia was not getting that telemetry.

Telemetry is a weird crutch that people keep latching onto before they even break their legs.


You can manage to have things without telemetry, but having telemetry is incredibly useful. I think the example of "how much of our user base actually uses these features" is a very good one, specially in a compiler where maintaining old features could be adding a lot of complexity to the code base. And, as they also explain, a lot of bugs and undesired behaviors are things that the users won't know they have to report and just accept as part of the normal behavior. Things like cache misses, slowed down compilation times in certain situations, sporadic crashes... All of those things could be improved if the developers knew about them.


A common concern I see is people being worried that the reaction to having "how much of our user base actually uses these features" answered will be to remove the feature entirely, when in practice it is more of "we think implementation A of the feature is no longer in use and we have migrated everyone to implementation B, is that the case?" or "we have implementation A produce user visible effects, while we have implementation B run parallel to it to detect divergences, have any been detected in the wild in the past X months?".

In Rust we've had large migrations like that (the "new" borrow checker comes to mind), and we had really long periods of time where they were tested against the latest crate versions on crates.io until crater came back clean, and even then we had bug reports about regressions in the wild only after release on stable.

For me personally there's one big blind spot when testing only against published code, no matter how big the corpus is: humans are excellent fuzzers, and the malformed code they write and try to compile is hard to replicate. Having visibility into uncommon cases that are only visible on users' machines would be incredibly useful.

An example of this could be the "botched" 1.52.0 Rust release[1], where invalid incremental compilations were changed from silent to visible Internal Compiler Errors in nightly for several releases, to the point where the team felt all of the outstanding incr comp bugs related to them had been addressed, but when turned on in stable it immediately hit users in the real world, making it necessary to do an emergency dot-release reverting that change. If we had telemetry on stable compilers, instead of turning on the silent-to-ICE change we could have added a metric for the silent error and known ahead of time that the feature wasn't ready for prime time. With more telemetry we could have known what was causing this. Snooping over a user's shoulder while they hit the case could make identifying the bug trivial, but of course then that user would turn around and ask me in unfriendly terms "who are you and how did you get in?"

Arguments can be made against even implementing telemetry, and they can be compelling enough to elect against it, but shouting down the conversation from even happening is not helpful.

[1]: https://blog.rust-lang.org/2021/05/10/Rust-1.52.1.html


> A common concern I see is people being worried that the reaction to having "how much of our user base actually uses these features" answered will be to remove the feature entirely

That's probably because it's happened in real life quite a lot.


Among the telemetry-collecting applications and websites I use, or have used, I can't think of a single case where the software obviously improved due to this data being collected.

In fact, it seems to be the opposite in some cases. Firefox and Windows, for example, have generally become significantly worse for me over time, despite the telemetry that they're collecting.

In the "best" case, software like Visual Studio Code and Homebrew have merely remained mediocre.

I've seen much better results from developers who base decisions on feedback and bug reports that have been manually submitted by users, rather than trying to make assumptions based on automatically-collected telemetry data.


Well, the case for telemetry is precisely to make improvements that aren't obvious to the users. If they were obvious, people would see them and send bug reports.

I have used telemetry for mobile apps in the past and now at work to monitor performance of certain software we deploy in client data centers, and there’s a lot of things I notice in the telemetry that users wouldn’t find and report. For example, I remember I was able to fix an issue in a mobile app because I noticed startup times increasing each time users opened the app, and I could fix the problematic cache quickly. I bet most users didn’t really notice. Same at my current job, we’re able to detect slowdowns and processing bottlenecks that for the users just show as subtly erroneous data. Do they notice those fixes? Nope, but that’s the point, I want to be able to fix things before they notice and report them.


Telemetry is incredibly useful for ensuring that people don't use the product.

Many will likely switch away from Go intentionally because of this.


> Can someone explain how we managed to have programming languages and toolchain development for half a century without using telemetry, but somehow we need it today in our tools?

Back in the 90s in companies I worked for, attempting to add any kind of phone-home code was a fireable offense. Or at least, would get you a very stiff talking to from a few very high up people in the organization. You'd never even think of doing that again. Customer trust and privacy was paramount.

As we all know, spyware slowly started creeping into end user apps and later became a flood. Now it's difficult to find any consumer app that doesn't continuously leak everything the user does.

It's become so normalized that now even developers tools seem to think it's somehow ok to leak user data.


I think there is a whole generation of developers who have no experience with how to do these things in the absence of telemetry, so they genuinely believe it's not possible.


Two things happened in '00s in relation to this question. One, on demand computing infrastructure (cloud), and two, the scaling requirements of a new breed of networked services. The germinal change in response to these shifts was that processes replaced components, and system boundaries spanned devices.

When your code is running on application servers and your applications are composed of components, all the tools were already there, in the OS and as add-ons, like dtrace, and in whatever monitoring tools came with your application server. Today, instead of components, we compose systems out of (lightweight) processes, and processes can be created on any device, and the replacement for the application server is the whole gamut of k8s, terraform, elastic, ..., etc.

Nothing has changed in the abstract structure of our systems; it's just that the current approach has the beast dismembered, spread out, and loosely connected via protocols, instead of a linker or a dispatch mechanism of a platform.


It'll be interesting to see what effect telemetry has on the ongoing development of the tools and language, since we don't have another tool / language chain to compare it to.


You can somewhat compare with dotnet, which has opt-out telemetry.


I like that they changed their approach to opt-in. I also like how much effort they've put into making the data collection as anonymous as possible despite being a Google project.

Well done, golang team. Other companies with supposedly open languages (looking at you, Microsoft) can learn a thing or two from you.


Given how happy people seem to be sending large parts of their codebases to LLMs these days, privacy concerns over telemetry logging look quaint in comparison.


Programs running on your machine have access to much more than the code they are working with.


The actions of a few shouldn't be taken as representative of the whole.


Google doing more spying, unsurprising. At least it's opt-in (for now)


As long as they keep this opt-in, I see no reason to accuse them of anything. The telemetry collection design is clearly made to be as privacy-preserving as possible.

There's always the risk that they'll roll out the telemetry setup now as an opt-in feature and then switch it to opt-out down the line, but I don't think this is the current team's intention.


It'll be interesting to see how this is accepted by the community at large. I think explicit opt-in, combined with having the discussion about which metrics to collect in public, would be enough assurance for most that this isn't a bad idea — but that doesn't necessarily mean that most will opt in as a result.


That they made it opt-in means that I will no longer completely rule out using golang. I don't know if I'd actually opt in, though. I haven't evaluated that issue.


I might be wrong, but isn't dotnet telemetry opt-out by default?
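It is. The .NET SDK's telemetry is on by default and is disabled via an environment variable rather than a subcommand:

```shell
# Opt out of .NET CLI telemetry for the current shell session;
# set it in your profile to make it permanent.
export DOTNET_CLI_TELEMETRY_OPTOUT=1
```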


I'm fairly accustomed now to disabling this shite, but if it's only going to become more prevalent I could see it creating real toil. Are we also going to need a PiHole-like solution for servers and dev boxes?


It's opt-in.
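For reference, the design as it eventually shipped (Go 1.23+, if I have the details right) controls this with a single subcommand, where "local" — collect counters on disk but never upload — is the default:

```shell
go telemetry        # print the current mode ("local" by default)
go telemetry on     # collect locally and allow uploading
go telemetry off    # disable collection entirely
go telemetry local  # collect locally, never upload (default)
```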


> IP addresses exposed by the HTTP session that uploads the report are not recorded with the reports.

Like pinky promise, trust me bro.

This whole thing looks delusional. I hope someone is going to create a fork as Golang team has lost their marbles.


There aren't many orgs and people I would trust over this, but the go team gets the benefit of the doubt. They've proven that they're serious.


What is the point of a fork? Just don't opt in.


That answer is akin to "just use a VPN" every time there is an announcement of some kind of censorship of the web.

The point is, this is just introducing another security threat to worry about. You now have to be aware that the toolchain can call home and ensure it doesn't happen. Someone might misconfigure it, or a release might ship with a "bug" that sends telemetry regardless of settings.

This is just completely wrong and should be nipped in the bud.

The mere fact that they are pressing ahead with this is sinister. These things are never about "oh just don't opt in".

If someone really feels the need to send telemetry to Google and be spied on, they should use a completely separate tool that is not included in the toolchain.


Every program can call home, if it doesn't do what its author says it does. This is not specific to compiler toolchains, or this toolchain, and is not new.


AKA the "anything can happen" straw man. There is a difference between "anything might happen in the future" and "something specific is going to happen."


That's exactly my point. The Go developers promise that it doesn't call home (unless you opt in), just like every other developer promise that their program doesn't call home.

If you don't trust developers' promises, why is one of those worse?


(Maybe I'm just cranky this morning, so please ignore this comment if it fails to please you, but uh... I think the "real WTF" is getting into a position where you need telemetry at all. It's hard to describe what I mean, it seems like most folks who think about these things at all either get it or don't, and the two positions are so self-evident to those that hold them that it blinds us to the rationale of the "other side". In any event, "compiler does not access network" Works For Me. Sorry for the noise.)


Yeah; I can’t imagine how terrible golang’s test infrastructure would have to be for this to provide them with any practical benefit.

Their motivating example was something like "Go stopped working on clean macOS installs, and no one noticed for months", which kind of proves my point.


I don't think they need telemetry. Go is clearly a successful language and community without it.

The whole point of the proposal is that things could be improved with some telemetry.

Now of course you could weigh the tradeoffs and say the potential for improvement doesn't outweigh the risk of misuse, but it seems clear to me that reasonable people can disagree.


I really don't want to argue about this (so I don't know why I'm arguing about it. Like I said, I think I'm just cranky this morning.)

> I don't think they need telemetry. Go is clearly a successful language and community without it.

Right! So why "pee in the soup"?

> things could be improved with some telemetry

Or they could not do those things? (I read "Why Telemetry?" and remain unmoved.)

> you could weigh the tradeoffs

I have.

> the potential for improvement doesn't outweigh the risk of misuse

That's not my argument. It's not even misuse that I care about here (I do care about that, but that's a separate concern, and one that doesn't arise if you don't collect data in the first place, eh?), I care about use. I don't want my compiler to make network connections to Google or anybody, for any reason.

> reasonable people can disagree

That's what I said.


When reading the title, I was hoping it would mean better abstractions than context for passing around OpenTelemetry info in a golang codebase


It's a real shame libraries like OpenTelemetry encourage using the context to pass information around. In my programs I prefer to be explicit. Sadly this breaks so many libraries which depend on having a magically configured context to function properly. It's thread-local storage all over again.


The paradox of "thread local storage" is that we both really, really need it, and it blows up if you have it.

Contexts are at least a value you can see and manipulate rather than having something attached to your thread, and in particular, can pass across thread boundaries if you need to.

One way or another, you end up needing some sort of scope that carries values that can't be strongly typed because the way you're composing those values doesn't work with any known strong type system in practical use. (I've seen some super theoretical ones that can in theory do it but I've never seen them brushed up into something practical and successful.)


Contexts are also immutable where it matters, which helps.


I like explicit, but I don't want to do a whole lot of extra typing either. What I'd really like is for the type system to automatically infer the type of something like the context object implicitly.


Hmm, my editor is able to autocomplete this. Not yours?


I assume you mean that your editor is able to autocomplete that something is a context.Context.

I assume what the GP means is to infer a strong type for a struct that has all the data a context has in it. This is much harder, for many reasons. A context that comes in from one path may have a RequestSource in it, but another may not; the resulting type of the function is rather complicated to infer. We prefer not to have two different functions as a result. There's also the problem that such inferences end up strongly tying types across many functions together, such that up in some middleware for your web site you add a new value into the context, and if there was a strong type for that context, that strong type would cascade throughout everything that could someday possibly touch that context. The result is much like checked exceptions in Java.

This is perhaps the hardest practical type problem I know. It seems to me to be very related to a similar problem, which is that of trying to strongly type errors. The sort of cross-scope type inference we envision in our heads is extremely unwieldy in practice, if you take the time to try to scope out what that would convert to in a real program with many nested scopes and arbitrarily complicated paths in to those scopes, plus arbitrarily complicated closures being passed around that further impact the types.


Oh right! I had misunderstood.


Prop drilling specific telemetry everywhere in the ecosystem seems much more painful in comparison.

C# is pretty neat in having an AsyncLocal[0] that goes beyond what thread locals enable

[0]: https://vainolo.com/2022/02/23/storing-context-data-in-c-usi...


yeah it's pretty dang handy :) As long as it's not abused or overused.


How is telemetry so valuable that language maintainers feel the need to introduce it even when it is so controversial?


When you’re in an organization with strict goals and no resources for implementing public tests, you have to make up for it with accurate information.


In what world is Google strapped for resources?


I think you’re overestimating how long Google’s money will sustain open source. Firstly, Google as an organization needs revenue to support the Go team’s expenses (and I wouldn’t be surprised if they cut some cash because of the OpenAI and MSC threat). Open source teams, especially in big corporations, IMO have limits on the resources dedicated to them, especially if the project can’t be placed at the “platform level”. Take the Android open source projects as an example: ideally I see no difference in how either team projects its will over the direction of the project, but Google must exert greater influence on Android because they depend on it so heavily (basically 75% of their revenue comes from there). So how much does Google depend on Go? I’m pretty sure Go hasn’t reached 7% of their codebase, or 10% at most. Discontinue Go? Fine, they dedicate a few teams to their specific use cases, or just rewrite it.


Every problem has a solution. Telemetry is good, but abuse of anything is bad. If this stays strictly in line with what they have planned, just like what they have been doing, I see no problem. Not everyone must be satisfied.


Good on them! That's how you do telemetry without it becoming spying.


In general, I have no objections to well explained opt-in anything.


Is it in the toolchain or in the applications made with Go?



Toolchain


I will be the one overweighted in the report because I keep typing go mod init and go work init...

I don't know, I never actually remember the commands. :o)


When did logging and reporting become "telemetry"?


Logging and reporting is generally for the benefit of the end user.

Telemetry is generally for the benefit of the marketing team, law enforcement and development team.


When was it not? Telemetry - the act of collecting logs/"metrics" and reporting them back to a remote (tele) station - how could this not be considered telemetry?


It has only been called that recently in the context of software.

Edit: have been doing software since the mid-nineties. You can downvote all you want, but this is a more recent usage.


Actually the name for it previously was spyware or the act of spying. The term telemetry is/was more neutral, but it also has a bad reputation by now (euphemism treadmill).


It's called "spyware" when you don't trust the collector to be acting in good faith.


Used to be that transmitting data about what the user's doing without asking for permission was "spyware" automatically—that it was happening behind your back was already a sign of bad faith.

The opt-in thing is fine, but some of us are still stuck, I guess, on older standards for software ethics, and find it entirely unacceptable and alarming that opt-out was ever proposed in the first place, about as bad as if they'd proposed adding an opt-out bitcoin miner to it to help fund the project—it's disturbing they'd consider that OK to even propose.


When developers started to log on my computer and report that data to their servers.


When did it stop being spyware?

(some time in the back half of the '00s, I reckon)


What does telemetry mean?


It's accurate in that it's the "remote collection and transmission of data", but it's a term traditionally associated with aerospace. Software has been doing this for a long time, but only recently have people started calling it "telemetry".


It's not just an aerospace term. It's a general term dating back centuries across dozens of different industries.

https://en.wikipedia.org/wiki/Telemetry

Look at that list under Applications. What makes software so different that it shouldn't be included?


For my part: it didn't use to be called that, and it feels like software folks trying to feel fancier than they are (and/or whitewashing bad practices by pushing new-to-the-industry terminology that makes spying sound scientific).

Though at this point it's been widespread long enough that I suppose it's just a normal term that's "always been used", to some developers, not a new, alien-feeling part of the software lexicon.


I think the word telemetry is fine. It's concise, accurate, and not emotionally loaded in either direction.

Microsoft OTOH calls their data collection the "Customer Experience Improvement Program" - now THAT is whitewashing.

It's only been a decade or so now that you could rely on most computers always having an internet connection. That's probably why the term feels new - it only started being used when the practice became technically feasible. Maybe others remember things differently. /shrug


> feels like software folks trying to feel fancier than they are

Like neural networks?


I suppose the ur-example in programming would be "dynamic programming".

https://en.wikipedia.org/wiki/Dynamic_programming#History

Sounds super-fancy and advanced, but when you dig in it's like, "oh, that's all?"


I was one of the more vocal opponents of opt-out telemetry in the original discussion. One thing that seems clear to me coming out of that exchange is that the Go devs are quite anxious to get good-quality data, and are concerned that not enough people will opt in.

I for one plan to enable it, and I hope others will do the same.


Tangential topic, but if Golang had any way to have constant pointers and frozen variables, the language would be my favorite.


That sounds more unrelated than tangential.


IOW, it's a 90 degree tangent?


A language without operator overloading is a non-starter.


Funny, for me it’s the other way around. Ain’t no way I’m going back to the land of C++ operator abuse, where ‘+’ doesn’t even mean ‘+’ anymore.


Without operator overloading, you cannot define arithmetic data types that have a homogeneous API with fundamental types. Since math is basically all that computers do aside from some IO here and there, that means a language without operator overloading is a domain-specific language. Curiously, none of these languages that lack operator overloading come close to covering all of the common use cases for math, except perhaps Odin.


A language without user-defined operators is a non-starter. ;)


For which use case?



