
The algorithm needs a proof and the implementation needs rigorous testing (Jepsen).

Justice was hacked a helluva long time ago.

  [citation needed]

Circa 1996, the DOJ web server was hacked by somebody. At this point I don't remember the details: Adolf Hitler as AG, a naughty picture or two. It shouldn't be hard to find.

That makes for nice headlines, but anyone who actually understands this stuff knows that hacking the public-facing web server is not a big deal and not really related to obtaining private info like this.

Edit: I just remembered, there is (of course) a relevant xkcd for this: https://xkcd.com/932/


Actually, he is responsible for graphics cards having to ship OpenGL drivers. Without Quake, MS would have probably been successful in killing OpenGL on the desktop.

MiniGL actually.

In the age of APIs like Glide.

Even if Microsoft had cared about OpenGL, the API never got any love from console and graphics card vendors in terms of tooling and support for game developers.


Many farmer's markets in Eastern Europe are like this.

http://doc.rust-lang.org/reference.html#appendix-influences

Parallelism is of course very important. But if serial speed isn't in the same range, "go parallel" will become the same perf crutch as the Python folks saying "drop to native". Rust programs shouldn't need to be parallel just to beat a single-threaded C++ program.

>Rust programs shouldn't need to be parallel just to beat a single-threaded C++ program.

I don't write code in either language, but as an outsider my impression is that Rust was created to make programming for parallel execution easier.

If that's the case, why are we comparing a use case C++ is optimized for against a use case Rust isn't optimized for?


We do want Rust to be excellent at parallel execution, but that doesn't mean we ignore single-threaded performance. The way we make parallel/concurrent code better has no negative impact on single-threaded performance. In fact, you can sometimes use more efficient data structures when you know you're not using multiple threads: Rc<T> instead of Arc<T>, for example.
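A minimal sketch of that trade-off (the data here is purely illustrative):

  use std::rc::Rc;
  use std::sync::Arc;
  use std::thread;

  fn main() {
      // Rc<T>: plain (non-atomic) refcount. Cheaper, but !Send,
      // so the compiler rejects moving it to another thread.
      let local = Rc::new(vec![1, 2, 3]);
      let local2 = Rc::clone(&local); // bumps an ordinary integer
      println!("{}", local2.len());

      // Arc<T>: atomic refcount. Slightly more expensive, but Send + Sync.
      let shared = Arc::new(vec![1, 2, 3]);
      let worker = thread::spawn({
          let shared = Arc::clone(&shared); // atomic increment
          move || shared.len()
      });
      println!("{}", worker.join().unwrap());
  }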

Exactly. Most C++ codebases lean too heavily on `shared_ptr` because they don't/can't know when something will cross a thread boundary.

Rust wasn't created to make programming for parallel execution "easier". What Rust does is make implicitly unsafe parallel code impossible to write.

This is a good thing, but is orthogonal to single threaded performance (which from what I've read, the Rust team definitely cares about).


It's absolutely intended to make it easier. Easier in the sense, for example, that you can create a multithreaded program that shares stack-allocated data, without having to debug complex synchronization yourself.
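A minimal sketch using scoped threads (std::thread::scope here; the crossbeam crate offers the same pattern):

  use std::thread;

  fn main() {
      // Stack-allocated data, shared by reference across threads.
      let data = vec![1, 2, 3, 4];

      // The scope guarantees every spawned thread joins before
      // `data` goes away, so plain &data borrows are provably safe.
      thread::scope(|s| {
          for chunk in data.chunks(2) {
              s.spawn(move || {
                  let sum: i32 = chunk.iter().sum();
                  println!("partial sum: {}", sum);
              });
          }
      });
  }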

The rayon library is an exciting example of some of the possibilities http://smallcultfollowing.com/babysteps/blog/2015/12/18/rayo...

Rayon, being a multithreading library, is of course itself not so trivial to write. What it guarantees is that users of rayon can rely on the regular Rust type-system rules: if their code compiles, their use of rayon is thread-safe.
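Concretely, typical rayon usage is just swapping an iterator for its parallel counterpart; a minimal sketch, assuming rayon as a dependency (rayon = "1" in Cargo.toml):

  use rayon::prelude::*;

  fn main() {
      let v: Vec<i64> = (0..1_000_000).collect();

      // Same shape as v.iter().map(...).sum(), but work-stolen across
      // a thread pool. A closure that mutated shared state without
      // synchronization would simply fail to compile.
      let sum_of_squares: i64 = v.par_iter().map(|x| x * x).sum();
      println!("{}", sum_of_squares);
  }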


I find Rust is still a complex language. I don't find concurrency any easier to write in Rust than with the C++11 concurrency API, for example. I don't have much real experience with concurrent Rust applications, though.

What Rust does give you is that once your code compiles in safe mode, you're very confident in it.

Overall, I like the complexity, and I have been enjoying my time with Rust. I began using it for bare-metal ARM programming. After using Rust in "no standard library" mode, I'm now convinced there are no more valid reasons for using C in this century. To me, Rust is a no-brainer replacement for C.

It's not, however, going to affect the adoption of modern C++, which occupies a slightly higher-level niche in performance-critical applications. It's the libraries that make C++ awesome, definitely not the language itself (although the core is improving).


Is there a rayon replacement in C++ (lightweight tasks for parallelism with a lock-free work-stealing scheduler), other than Cilk itself, which isn't really C++?

Take a look at Intel Threading Building Blocks. https://www.threadingbuildingblocks.org/tutorial-intel-tbb-t...

There is also the Microsoft C++ REST SDK, which includes task-based programming. https://casablanca.codeplex.com/wikipage?title=Programming%2...

There is also async++ https://github.com/Amanieu/asyncplusplus/blob/master/README.... which is an implementation of the proposed C++ Concurrency TS.

In January 2016 the Concurrency TS was accepted, so we should start seeing implementations from the compiler vendors. https://gist.github.com/StephanTLavavej/996c41f7d3732c968ede


I have no idea, sorry.

This is excellent! Thanks Terry.

Devops everywhere celebrate.

Why?

If you've ever run tableau server, you know why.

I would very much like to know what particular issues you are having as well. I work at Tableau, and I want to make sure that the issues people are seeing are on our radar, if they aren't already.

Thanks for asking. I just installed 9.0.4, and here are a few examples:

- Tableau Server runs only on Windows, so why can't it use a TLS certificate and key from the CryptoAPI certificate store, rather than requiring these to be converted to PEM format (with Unix line endings!) and saved in the file system?

In an enterprise with an internal CA using Active Directory Certificate Services, these extra steps have to be done not only at installation but also every time the certificate expires. Compare the experience with Microsoft IIS: the server automatically requests a renewal from AD CS, retrieves the new certificate, and begins using it.

- Tableau Server should be able to run as a Group Managed Service Account, so we can give it access to remote data sources without having to assign (and regularly change) yet another service account password.

- It would be helpful to have a scriptable installation process; as far as I can tell, there's no way to install Tableau Server without clicking through wizards.


Thanks for the input. I am going to forward these on to the server dev team and follow up with them in person. They may be aware of some of these already but it is important to us to keep track of what is causing our users the most headaches. I appreciate you taking the time and letting me know your suggestions and the issues you are having!

I've done a Tableau Server rollout at a previous client, and IIRC we were easily able to run it as a managed service account (Microsoft AD).

Any specific difficulties you faced with this?


Having to program against its APIs recently:

1. No ability to use a 3rd-party auth provider AFAIK, which means either keeping Tableau passwords in a database or having users remember two different passwords

2. Embedded views use synchronous requests, which can easily hang the browser. Synchronous XMLHttpRequest has been deprecated for a while. I think I even saw a version of Dojo from 2005 being loaded.

3. Reports are either static size or dynamic size, and unless you're using the (clunky but well documented) JS SDK, there's no way to tell.

4. Viewing reports in the browser is sloooooow. Browser console output is filled with warnings.

5. In order to put together sheets from multiple workbooks into a browser-based view, you need to either a) load the JS SDK for each of the workbooks and query for sheets, which is extraordinarily slow, or b) do it with the REST API, whose authentication is asinine (see #1).


> 1. No ability to use a 3rd party auth provider AFAIK, which means either keeping tableau passwords in a database or having users remember two different passwords

The answer is SAML/ADFS; you should look at enabling this integration. If you are not using AD/LDAP, that's a whole different story. But SAML/ADFS is pretty much the standard way: since Tableau is a Windows service, it is very natural to just use AD/LDAP/SAML.


Yeah, our auth is rubycas. No ad/ldap.

OpenID in 9.2 also

[1] I had to set the client-side map rendering threshold to a very high number (100000 I believe) to get maps to render at all. Server-side rendering doesn't work, even though it can contact the map servers and display all of the examples in the documentation (Miami/Havana I think?).

[2] It's been a few months, but I remember getting the license activated offline was a weird process. Something like: point tabadmin toward a license file, which generates a number or JSON or some other file, which you then paste into or point the UI toward, which gives you another file to use in tabadmin... and at the end tabadmin gave me an error. Now when I go to "Manage Product Keys" it acts as though it is unregistered, but the server still starts without error (it did not before the failed activation ritual).

I do have a ticket in with support for [1]. Given how much of a bitch it was to activate (or half-activate) I'm reluctant to investigate [2] further.

Also, I'd like to see a Linux server. Tableau is our only Windows server, which weighed heavily against the product when we were considering alternatives.


So, I am not on the server team specifically. A lot of these issues that are mentioned may already be in the pipeline/on our radar. However, I think it is beneficial to make sure that we continually follow up to ensure the squeaky wheel gets the grease, so to speak.

All of these issues mentioned here will be sent to the server product owners and managers. :)

I am, however, on the maps team. I am curious about [1] above. I'll see what I can find internally on this. I am rather curious since this isn't something I have seen.


Offline license signing is a solved problem; Sophos, for one, has figured this out with the way they license their UTM product.

When they give you a license file, it's cryptographically signed with their GPG key, and the public key resides on the appliance for verification. All you have to do is get that license into the system, either by USB key, typing it in yourself in Vim, or simply uploading the license file in the web UI if you have access to it.
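The underlying pattern is plain detached-signature verification against a public key baked into the product. A rough sketch in Rust with the ring crate; the Ed25519 scheme and file names here are illustrative assumptions, not Sophos's actual format:

  // Assumes the ring crate as a dependency (ring = "0.17").
  use ring::signature::{UnparsedPublicKey, ED25519};

  // Vendor public key, shipped inside the appliance image.
  const VENDOR_PUBKEY: &[u8] = include_bytes!("vendor_ed25519.pub");

  /// Offline check: was this license file signed by the vendor?
  /// No phone-home required; verification is pure local math.
  fn license_is_valid(license: &[u8], signature: &[u8]) -> bool {
      UnparsedPublicKey::new(&ED25519, VENDOR_PUBKEY)
          .verify(license, signature)
          .is_ok()
  }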


Trusted Authentication is a poor solution to the problem of how I can embed views in my web app without having the end users of my web app have Tableau server accounts. For the following reasons:

- I have to explicitly add each server IP address. I have no way to trust an entire subnet or range of addresses. This is a huge problem in an auto-scaling app server environment where I don't know the IP addresses my app servers will have. It is a major annoyance to developers whose DHCP-assigned, dynamic IP addresses keep changing.

- There is no API for adding trusted IP addresses. It is a manual process.

- The Tableau server must be stopped and restarted to add new trusted IPs.


I'm elated to see that you're responding here. Please, please see my comment below: https://news.ycombinator.com/item?id=11043082

There is so much low-hanging fruit. I feel like anything related to actually running and maintaining Tableau is ignored, and judging from the comments here I don't seem to be the only one.

I would add that I'm disappointed the only way these issues get attention are articles and threads like this.


I recently had to perform updates, and the biggest aggravation was having to start and stop the server for every config change.

Also, lack of SharePoint integration / ability to handle federated login services with OData connectors.

Minor issues though, I'm a huge Tableau fan.


Well, as someone who doesn't, can you enlighten me?

It takes an ungodly amount of resources to do not very much. It's very buggy; frequently the HTML charts time out or just break with no indication why. Upgrades require a lot of manual work. The JavaScript API is reasonably documented but, again, buggy.

It's an unholy combination of Rails and Postgres somehow hacked to run on Windows. Really, they should just ship a Linux VM that runs these things decently.


I run a small one, with little effort other than what comes along with Windows; it's our only Windows-based system.

It would be interesting to know what problems you have faced.


My experiences are very different.

Many Linux services have a concept of reloading: if the config file changes, you can send the running program a signal and it will re-read the config. This is very useful for production systems.
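For example, the classic SIGHUP pattern, sketched in Rust with the signal-hook crate (signal-hook = "0.3"); the app.conf path is made up:

  use signal_hook::{consts::SIGHUP, iterator::Signals};

  fn main() -> Result<(), Box<dyn std::error::Error>> {
      let mut config = std::fs::read_to_string("app.conf")?;
      let mut signals = Signals::new(&[SIGHUP])?;

      // `kill -HUP <pid>` makes the service re-read its config;
      // on a bad file it keeps serving with the previous config.
      for _ in signals.forever() {
          match std::fs::read_to_string("app.conf") {
              Ok(new) => config = new,
              Err(e) => eprintln!("reload failed, keeping old config: {}", e),
          }
          println!("config is now {} bytes", config.len());
      }
      Ok(())
  }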

Tableau (9 at least) has no such concept.

Change the email address it reports to? Restart tableau.

Change the location of the SSL certificates? Restart tableau.

Want to apply an update for tableau? Uninstall your current version and install the new one. Oh and until recently when you downloaded the installer for tableau server the file name didn't actually contain the version number.

This product was not designed with ops in mind at all.

Edit: I forgot, I've actually had a Tableau server fill itself up with logs. Tableau has logs in many different locations outside of the Windows Event Viewer and doesn't include log rotation facilities for all of them.


That sounds like most Enterprise software.

It's like that because R&D and Operations never talk, and of course your average Windows ops person has a poor understanding of operating systems.


Those things are indeed a pain with Tableau, and also common with a lot of Windows-based applications.

Never understood why Tableau is Windows-only, or why it has the restart-to-reconfigure issue. Last I looked, it was largely a Tomcat and PostgreSQL based product.


[Replied to wrong comment]

What's the actual issue with restarts? Is the downtime due to a restart unacceptable? Is it that users lose application state when a restart happens?

Just trying to understand, since I've written software with the same restart-to-reconfigure workflow and would like to understand what makes it problematic.


If you run Tableau in an enterprise environment, you will likely have a lot of C-level executives, global sales teams, and more relying on Tableau to be available outside of your local business hours. This means any maintenance needs to be planned, with communications sent out to all stakeholders.

If reloading was an option then there wouldn't be downtime, and I wouldn't need to schedule a maintenance window for something as simple as updating an email address. The idea being that if there is a config error during a reload, the system just continues uninterrupted with the original config. If I have to stop the system completely in order to run the config sanity checks when it starts again, the potential for prolonged downtime is much greater.


Thanks for that perspective, I hadn't thought of that.

Would a system that did something like an internal cut-over be useful? E.g. try to start a whole new instance of the application; if it loads, let it become the running application; if not, write an error log and shut down.

It would still lose all the state associated with the previous instance, e.g. user sessions, but would avoid this specific issue.

I agree that it's pretty silly that things like email addresses need a restart, but I'm wondering in general how bad this pattern is.


That would work IMO, yes.

An interrupted session isn't a big deal if it's an infrequent occurrence and they can just login again.

I think an improved solution in this vein would be a tool that would let you sanity check the config before reloading.
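Something like this validate-then-swap shape, where the Config fields are hypothetical:

  use std::fs;

  // Hypothetical config shape, for illustration only.
  struct Config {
      smtp_from: String,
  }

  fn parse(raw: &str) -> Result<Config, String> {
      // Real parsing/validation would go here.
      let smtp_from = raw
          .lines()
          .find_map(|l| l.strip_prefix("smtp_from="))
          .ok_or("missing smtp_from")?;
      Ok(Config { smtp_from: smtp_from.to_string() })
  }

  /// Sanity-check the candidate fully before cutting over; a bad
  /// file leaves the service running on the old config, untouched.
  fn try_reload(live: &mut Config, path: &str) -> Result<(), String> {
      let raw = fs::read_to_string(path).map_err(|e| e.to_string())?;
      *live = parse(&raw)?;
      Ok(())
  }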


"tabadmin cleanup" helps a little.

But yeah, this thing is a mess.


I think a better question is what problems they haven't faced.

This is an extremely frustrating thread to follow. We have some people who have run a Tableau server with no issues, and now two people who are effectively saying "oh, it's awful but I have no interest in telling you why".

Five people telling detailed horror stories that make every other devop feel the pain? (Except those seasoned Exchange experts, of course.)

Did you ever consider there might be valid reasons some folks prefer *nix-based servers?


> Five people telling detailed horror stories that make every other devop feel the pain?

Check the timestamps of those messages vs mine. At the time there were only the two responses I mentioned.


You have a point there.

I wouldn't even know where to start to be completely honest.

Thanks for your honesty, but I hope you understand that's completely useless. I might have preferred if you lied.

To be completely honest I've never even heard of Tableau.

> I think a better question is what problems they haven't faced.

Deploying visualizations without having to develop any HTML or JavaScript code.

Publishing to the desktop (Windows or Mac), to the web, to the cloud, or mobile devices (iOS and Android). Publish to the server once, consume on all supported platforms.

Deploying a copy of a current site for redundancy, testing, or development: install the app, back up the primary Tableau database with its admin utility (command line), and restore it on the new box. All data, visualizations, users, and permissions are contained in that single restore step.

Tableau means I spend time working with my data instead of on its presentation. It's not a perfect product by any measure, and could obviously use some improvements, but it is a time-saver in many areas.


+1

It was pretty apparent in '08 that Obama wasn't going to _save_ anything. Obama is, was, and will always be a statist. Everyone was exuberant about the removal of Bush and nothing more; the fact that he is considered black was just a bonus. In '12, it was literally the lesser of two evils.

Per capita statistics encourage a false assumption of sharing; they aren't measuring what most people think they are. If one person earns $1,000,000 and nine earn nothing, income per capita is still $100,000.

I was responding to this claim by jernfrost: "Growth per capita in Japan has been completely normal in the supposed terrible years. It is the population decline which causes the overall GDP growth to look anemic."
