Hacker News | ceor4's comments

Same boat here, although I'm a bit worried about the downsizing from 15" to 13". Perhaps with a docking station it might not be so bad. But hopefully Dell jumps on this opportunity to make a 15" Developer Edition.


How's Linux support on the XPS 15?


I'm currently running Ubuntu 16.04 on it, and everything I've tried works, but it has taken some fiddling. (I have swapped the wireless card for [1] on the advice of a coworker who didn't even like the way the stock one worked on Windows.) I can't vouch for USB 3, Thunderbolt, or Bluetooth because I haven't tried them. Also, stock Ubuntu can't control the backlight brightness but xbacklight can, so it seems to be an Ubuntu issue rather than a Linux one.

It also installed next to Windows pretty painlessly, once I turned off Secure Boot in the BIOS. GRUB worked without hassle. You may also have to fiddle with the AHCI vs. RAID settings in the BIOS; Google around for info on that. You may also need to reinstall Windows 10 from scratch to avoid issues with "locked blocks" on the main drive so you can shrink partitions, and to deal with the switch from "RAID" to "AHCI" (not strictly necessary, but if you're going to reinstall anyhow it works). It depends on your setup.

I do recommend that if you get one, you figure out whether you're going to dual-boot immediately, before you "move into" Windows. It's much easier to put together a dual-boot setup with a fresh install of Windows 10, I found. (And it's actually Windows' fault, not Linux's, for refusing to evacuate enough of the drive.) If you do plan to reinstall Windows, go to the Dell driver page, grab all the relevant drivers, and stick them on the Windows 10 installation USB stick you make. (There's a tool from Microsoft that lets you download the Windows 10 install media; the installer then picks up the license off the motherboard.)

Also, I found you will have to install a particular BIOS version to avoid a weird and very annoying screen flicker where the display dims for a couple of frames every ten seconds or so; A06, I think, from the BIOS downloads page.

It has not been the smoothest trip. But then, getting Linux running on a laptop perfectly never is. But I have ended up with a satisfactory primarily-Linux system. (I boot into Windows for games. Dual-boot is less frustrating when booting is so fast; I counted 32 seconds to hibernate Linux, reboot into Windows, and have Steam up and functioning. It's actually faster for me to do that from Linux than to start up my PS3 and be playing a game from a cold AV system start.)

[1]: https://www.amazon.com/gp/product/B0167N9R8E/ref=oh_aui_sear...


Is it really necessary to disable Secure Boot?

I've got a T430s that dual-boots between Windows and Fedora, and Fedora is fine with Secure Boot on (if you use custom kernel modules, you will have to sign them, though). Windows, on the other hand, can be annoying when Secure Boot is off.


Ubuntu nags you to do it so it can run third-party drivers. It's never really explained what those drivers are, and I haven't noticed any difference with it disabled or enabled.


Fedora and RHEL won't load any unsigned kernel module (yes, I found out the hard way: I was wondering why VirtualBox wouldn't run, even though it successfully builds its own modules). However, you can enroll a MOK (Machine Owner Key) and sign whatever you want.

I was under the impression that Ubuntu does not enforce kernel module signing even with Secure Boot on.


I don't know; I didn't try leaving it on. I generally expect that the things Secure Boot protects against are things unlikely to affect me.


Go for the Linux-specific "Developer Edition" for two reasons. First, the components it uses have been verified by Dell to work well with Linux. Second, Dell pays the Linux vendor (Ubuntu in this case) to make the hardware work well: that's the only way to get good software support for PC hardware.


It was rough when it first came out, but works well now. The thunderbolt dock is also working now.


Well, she already gets one of the best perks: queue-free bathrooms. Most of the new Amazon buildings are built with far fewer bathrooms to save floor space, but still have 50/50 women's/men's bathrooms despite the heavily skewed gender ratio.

Other perks include bread toasters if you're an SDE, ping-pong tables if you're in something like legal, and if you're unfortunate enough to deal with Oracle, there may even be snacks offered (only in a restricted area, obviously).

Your friend should negotiate very hard on salary, and perhaps even more importantly on level (assuming she's not straight out of college). Most developers don't care much about titles and don't mind whether they're coming in as an SDE 1, 2, or 3, but Amazon makes a big deal about it. Being an SDE 1 means you can get shafted on everything from worse hardware specs (wtf?) to less hardware (e.g., SDE 1s need to buy their own extra external monitors if they want one), and there are far fewer opportunities and much less discretion until you get promoted, which can be difficult.


Hmmmm... So the designation/title matters in the long run. Understood, thank you! :)

BTW, I think it is one of these positions she is being interviewed for...

https://us-amazon.icims.com/jobs/445219/sr.-industrial-engin...

https://us-amazon.icims.com/jobs/442313/operations-industria...

Any personal experiences specific to these locations/jobs? I checked out these Amazon jobs on Glassdoor but found only salary ranges, no review comments.

Are there women in this field? Any unwritten expectations at the job she needs to be aware of?...


Frankly, each of your three paragraphs seems unrelated to each other, so I'm still not sure what your argument is.

1) Not type checking at runtime is exactly what people mean by static typing, because the definition of static typing means the checks are done before execution.

2) This does not make it harder to optimize. On the contrary, if they generated type assertions, it would considerably slow things down at the boundary of typed and untyped code.

3) C++, Go, Rust, and Haskell are all fully statically typed; the only difference is that they support varying levels of type inference, which means you don't need to "type out the type", but that's really orthogonal to the whole optional/static/dynamic typing issue.


C++ and Go don't have type inference. All they do is set the type of new variables to whatever they're first initialized to. Type inference means figuring out the types of variables from how they're used. The litmus test for type inference is inferring function argument types.
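For contrast, here's a quick Rust sketch of that litmus test (with the caveat that Rust only infers parameter types for closures, not for top-level functions):

```rust
// The closure parameter `x` has no written type; it is
// inferred from how the closure is used on the next line.
pub fn demo() -> i64 {
    let add_one = |x| x + 1; // type of `x` unknown at this point
    add_one(6i64)            // this call fixes `x: i64`
}

fn main() {
    assert_eq!(demo(), 7);
    println!("{}", demo()); // prints 7
}
```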


I'm sorry, but that's not the definition of type inference.

"Type inference refers to the automatic deduction of the data type of an expression in a programming language."

https://en.wikipedia.org/wiki/Type_inference

You'll notice C++ and Go are listed there as languages with type inference.

But you're right that they don't have nearly as strong type inference as Rust, which in turn doesn't have as strong inference as Haskell, etc.


Rust and Haskell have very similar type inference. The only major difference is that Rust doesn't infer type signatures, but that's a design decision, not a power issue.


> "Type inference refers to the automatic deduction of the data type of an expression in a programming language."

Depending on how simple "deduction" can be, that arguably includes every language that includes expressions.

And if the deduction is required to include usage information, it quite arguably excludes C++/Go style "var": a variable declaration is not an expression.

Generally, that Wikipedia article doesn't seem very good - check the talk page.


True, C++ and Go don't have a real type inference engine. They just compute the result type of an expression and use that to create the type of a variable in an assignment. However, this handles enough of the common cases to be quite useful.

(When you have a real type inference engine, you spend a lot of time trying to figure out what it did, or why it didn't do something.)


Nobody forces you to use type inference. If you think a particular piece of code is too tricky to understand without explicit type annotations, by all means use explicit type signatures.

That being said, I have never encountered a program that was hard to understand because types were inferred rather than explicitly annotated. I use explicit type annotations in two situations:

(0) As a compile-time analogue of printf debugging. Not exactly a joyous thing.

(1) At module boundaries, to control what modules expose to each other.


If you don't call what C++ and Go do "type inference", would you call it "type deduction"?


It really doesn't matter what he calls it, as type inference has a well-accepted definition that includes Go and C++. See: https://en.wikipedia.org/wiki/Type_inference

Perhaps the parent was trying to talk about "HM type inference" or something, but "type deduction" is also a fine name for a subset of type-inference, and perhaps more explicit for C++ and Go.


Type inference (not just Hindley-Milner) is divided into two logical phases:

(0) Generating a system of type equations, by recursively scanning the AST.

(1) Solving the system of equations. The nontrivial nature of this step is what makes type inference, well, inference.

In Go and C++'s case, the generated type equations come out already solved (that is, of the form `TypeVar = TypeExpr`), so there's nothing to infer!
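To make the contrast concrete, here's a small Rust sketch where the equation genuinely has to be solved rather than read off the initializer; the annotation on `n` flows backward into the generic `parse` call, something a C++-style `auto` (initializer-to-variable only) can't express:

```rust
// `s.parse()` is generic over its result type; nothing on the
// right-hand side says u32. The annotation on `n` is a separate
// equation the checker solves against parse's type variable.
pub fn parse_u32(s: &str) -> u32 {
    let n: u32 = s.parse().unwrap();
    n
}

fn main() {
    assert_eq!(parse_u32("42"), 42);
    println!("{}", parse_u32("42")); // prints 42
}
```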


Perhaps you'd be best off taking it up with Wikipedia and the primary sources, instead of arguing with random internet people who are using those same definitions.


I don't think it deserves a special name. Would you use a special term for the fact you don't need to specify that the sum of two ints is an int?


I disagree. Naming things is useful. "Type deduction" says clearly what happens, and is still distinct enough from "type inference" which is a different, more sophisticated concept. I've seen the term "type deduction" used many times in the context of C++ and Rust.


Rust has actual type inference; it just happens to be a local affair. For example, you can create a vector, even an empty one, without explicitly specifying its element type. But somewhere in the current function you have to provide information about the element type, say, by inserting elements into the vector, or by using an element in a way that constrains its type.
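A minimal sketch of what that looks like:

```rust
// `Vec::new()` starts out as Vec<?> with an unsolved element
// type; the later `push` supplies the constraint that solves it.
pub fn words() -> Vec<String> {
    let mut v = Vec::new();       // element type still unknown here
    v.push("hello".to_string());  // now it must be Vec<String>
    v
}

fn main() {
    let v = words();
    assert_eq!(v.len(), 1);
    println!("{:?}", v); // prints ["hello"]
}
```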


The ability to name things is useful, but there are far more things out there than we can come up with good names for, so we have to decide wisely what things we name and what things we don't. Something like C++'s `auto` doesn't need a name of its own. It's just unidirectionally propagating type information from child to parent nodes in the AST. This is something even FORTRAN (when its name was all caps) and ALGOL implementations did during normal type checking.


Yeah, it's still alive and kicking. I try to use it when I can; it has a lot of nice properties. As an unregulated industry it abounds with scams and fraud, but if you use common sense and avoid anything remotely "get rich quick" or "altcoins", you'll do fine. If you're interested in it, I'd highly recommend signing up with circle.com or coinbase.com and buying $5 or $10 worth of bitcoin to play around with.


I think Apple is pretty savvy about this sort of attack. They only banned him after there were thousands of fake reviews (supporting his app, and harming others) AND an account tied to his (same device IDs and credit card) was involved.

I think Apple acted pretty responsibly here.


> Why Does Apple NEED an acknowledgment that they didn't do anything wrong? What difference would it make?

I get what you're saying, but if Apple gets hammered with bad publicity every time their anti-fraud team does the right thing, they're going to stop doing the right thing.

Especially as this case is going to be cited for years to come, it's important that Apple has something to point to and say: "We didn't just arbitrarily ban the account, it was involved in manipulating our reviews"


>"We didn't just arbitrarily ban the account, it was involved in manipulating our reviews"

>anti-fraud team does the right thing

That's not what happened though. What happened is that Apple has their internal tools to link fraudulent accounts, to keep out bad actors, and this "good" account got caught in that web. You can just as easily argue that he didn't actually do anything wrong; after all, if he had, Apple wouldn't even consider reinstating him. The fact that Apple linked the accounts internally doesn't actually point to any guilt or wrongdoing... it could even be seen as Apple's policies arbitrarily hurting the little guy.

I can see both sides very clearly and I can see a middle ground very clearly. The only thing stopping this from being resolved is "bruised egos" on both sides.

Edit: Added in second quote


But what if the developer did do something wrong, and the account is not so "good" as the developer is trying to make it out to be? There's too much that stinks about it:

1. Opened up a developer account for a relative 4 years ago. Relative. Yeah, ok. And 4 years ago... don't credit cards usually expire before then?

2. The same devices were being used on both accounts. While the info isn't available, I'm sure Apple can know if these same devices were still in active use by both accounts.

3. Dash isn't the problem. Too many people seem fixated on why the developer would need to do review manipulation on Dash, when that's not the problem at all. It's the other apps on the other account that were the subject of App Store review manipulation. Those apps' descriptions contained the developer's own email address: http://appshopper.com/search/?searchdev=603546869&sort=name&...


I do agree that he could totally be lying... but that doesn't seem to be Apple's belief, or they wouldn't reinstate him.

Also, if he really was guilty, why would he poke the bear after Apple agreed to reinstate him... why not apologize and get off scot-free?

It appears to me that both sides agreed to a set of facts and now it's just a matter of setting the record straight. No one seems to want to admit fault, and they are being childish about it, since it's in both of their interests to do so.


Interesting... and this is com.kapeli - seems to be related:

https://software.com/publisher/kapeli


I think when you sign up and the only ID they get is your credit card, it's pretty obvious that the account will be linked to you.

From the call, it appears that Apple only wanted a clarification in that direction, i.e., "I should not have given my drunk little brother the car keys".

They're not reacting from a "bruised ego", since a professional PR team doesn't get emotional in that sense.

They feel that the initial accusations have created actual damage for Apple's image, and they want him to stop the pitchfork-wielding mob.


I don't know who is dealing with this on Apple's side... but it's a mistake to think brands, executives, employees, and even PR agencies don't get emotional and react from a bruised ego.

I bet the PR costs exceed the actual damage in this case. If they were really afraid of damage, they wouldn't come out swinging; they would simply apologize for banning the account, and the public would forgive them instantly.

Both sides are acting against their own interests IMO.


> That's not what happened though. What happened is that Apple has their internal tools to link fraudulent accounts, to keep out bad actors, and this "good" account got caught in that web.

Imagine a family account at the bank. Husband is committing fraud, bank closes the account to stop fraud and 'good' wife cannot use her credit card anymore.

EDIT: AFAIK, the other account also used the same identifier for their apps. Apple sees that there is a person/company with fraudulent activity in one of its accounts and bans that person/company. Simple as that.


But if Apple is ready to admit him back into the dev program, then the anti-fraud team didn't do the right thing.


The guy has already admitted that the fraudulent account belongs to a "relative" who uses the same credit card and same test devices. It seems like Apple gave him a lot of leeway due to him actually making a good app and being public.

While I'd like to give him the benefit of the doubt, the overwhelmingly most likely case is that he was doing exactly what Apple thought he was doing. At the very least, he should take some responsibility for the fact that he's paying for someone's account who is actively trying to harm his competitors.

Apple probably could've avoided a lot of this mess by not overtly banning his account, but instead doing the exact opposite of what his manipulations intended and making the app almost impossible to find.


Adblock Pro did that for me; disabling it on the website fixes things.


In case you didn't read the article, it's actually an interesting dive into what it takes to improve the user experience of YouTube in places with poor bandwidth. It's really worth a read, and while you're correct that YouTube makes its money from ads, that's actually consistent with YouTube Go's goal of improving the user experience and bringing it to hundreds of millions of additional eyeballs.

