"with people having apparently similar but in reality completely unrelated problems"
Not sure what he was reading - this is a known problem with many early-2011 MBPs. I had one (still do - it's in the closet). Apple is refusing to acknowledge this as a problem, and will offer to replace the motherboard (or something like that) for ... $300 and approximately 5 business days.
From what I've read, whatever they replace doesn't actually do the trick; by many accounts, even when you drop it off for repair, they've claimed they can't reproduce the problem. Hint: switch which video chip is in use, or force the use of one.
FWIW, if you're in the triangle area, sixrig.com gives really good service, and got me a new refurb laptop at 9pm the day after xmas.
Wow: “The problem might be on the GPU,” said the “Genius.” That’s exactly what I was hoping to hear (he didn’t mention this until I pushed them that far). So I replied, “You all knew exactly what’s going on here and you intentionally kept this from the customers, right?” At that moment, the “Genius” walked off.
They wouldn't run any advanced diagnostics on mine because I had 'too much RAM' in it - the model was rated for 8GB but I had 16GB. Not necessarily unreasonable, but... it worked fine for 2 years with that RAM in it, and the problem was obviously video related. Well, nothing's that obvious, but... they didn't offer to just unscrew the back, remove the extra RAM, and run more tests - they seemed to want to not handle my issue at the store. "5 business days" without my business computer is a bit too much, and now I realize I need a fallback strategy. :/
There's a lot of variability in these stories in terms of how you're treated by a Genius when you take your computer in. When I took my problematic 2011 MBP in, they pretty quickly determined that it was a motherboard issue and replaced it.
Two months after my motherboard was replaced the video glitches started appearing again.
Coincidentally (or not?), it happens that I use Coda every day, and Coda forced Macs to use the discrete card all the time (inadvertently, apparently -- Panic followed Apple's guidelines on how to implement this "gracefully", but Apple's flag was buggy).
Interestingly enough, once Panic disabled the 'use discrete gpu' flag in Coda, my screen glitch problems basically went away on my new motherboard.
Graphics switching on dual-GPU Macs used to require you to log out when you changed the GPU. They never quite got the automatic switching right but it's much better on the current generation. I very rarely see random red-and-white checkerboard patterns flash on the screen.
I have an early 2011 MBP and last year the same thing as OP happened. Ended up bringing it to Apple and did their repair depot option for $300. At least they extend the warranty for 90 days.
Not looking forward to this happening again, though. Especially since last time it came back with a couple of scratches on the bottom case that weren't there when it was sent out. I guess we'll see.
I experienced the exact same problems with my early 2011 MBP, and the blue screen of death was fixed with a motherboard replacement. The failing graphics card also seems to have done damage to my Apple Thunderbolt Display, but that is incredibly hard to prove, so I do not know if Apple will fix that for free.
I must say that this is a bit disappointing for a $3500 piece of hardware. There should be recall procedures for electronics like this as well to protect customers from faulty production.
It's not that they "hold their value", which implies they'd sell for something similar to the original purchase price, and that's crazy. Rather it's that their value drops considerably more slowly than the average computer. A 3-year-old Mac is still worth only a fraction of the original purchase price, but a typical computer is worth only a fraction of that fraction.
The benefit of Macs is that they're high-end unix workstations optimized for usability. As far as I know, Apple is the only company still making anything fitting that description.
I do not find Macs to be very "usable", and previous-gen ThinkPads outranked them in everything but the shiny-showing-off-at-Starbucks metric. However, in the latest generations, Lenovo has nerfed the ThinkPads (new ones don't even have a middle mouse area - no more buttons, either).
They've also even put the logo upside down so that you can show it off at cafes.
Is there a Thinkpad model you would recommend for day-to-day use (mainly Java programming and LaTeX writing), that is light enough to be carried around daily?
The UX of OSX is not that of a high-end unix workstation. High-end unix workstations do not need Homebrew. High-end unix workstations do not provide the miserable, second-class X Windows experience that is found on OSX.
So I used "high end unix workstations" for years and years and years and OSX isn't perfect, but jesus christ I'd take its UI over just about any of the UIs from its competitors.
(and remember trying to compile anything on sunos and only having suncc? fuck that, imo, I'll deal with xcode)
The X Windows experience on my MacBook Pro is great! I get an OS that does pretty much everything my ThinkPad with Linux did, plus:
* wifi works out of the box and reconnects practically instantly out of sleep [1]
* sleep works without me spending two weeks recompiling Gentoo with different kernel options and finally discovering some boot flag that worked until Ubuntu broke it in one of their updates
* I don't need to edit my XF86Config file (maybe not necessary any more)
* no hassle of getting a compositing window manager working (hopefully compiz/emerald works better now than in 2008), figuring out how to disable gdm/kdm because the window manager options with the distribution are always terrible so I just start from .xinitrc because figuring out how to get gdm/kdm/xdm to run what I want seems to change every time I have to look.
* no more poor battery life
* filtering through low quality apps (KDE: crash-happy and UgLy; GNOME: pretty but light on features); still scarred over X-CD-Roast...
* AirDrop!
* no arcane errors about sound because Ubuntu suddenly decided that I needed Jack (or maybe it was the other one), when ALSA was working just fine
* oh, yeah, whenever I need to run an XWindows program it integrates seamlessly into my other programs. So much so that I tend to forget to use Ctrl for the keyboard shortcuts instead of that Apple command key with the weird symbol on it. It even seems to cut and paste from the rest of the system reasonably well.
Yeah, I'm totally loving the XWindows experience on MacOS X!
[1] I switched to Ubuntu because wifi and sleep just worked, and then they just didn't work after some updates
> * wifi works out of the box and reconnects practically instantly out of sleep [1]
You had a ThinkPad with less-than-stellar wireless support? I have never experienced any hardware problems with my many ThinkPads (close to a decade now).
> * sleep works without me spending two weeks recompiling Gentoo with different kernel options and finally discovering some boot flag that worked until Ubuntu broke it in one of their updates
Installs Gentoo, complains about recompiling... WTF? Personally, I do not like sleeping laptops and encrypted hard drives. However, I have many friends/coworkers who are less paranoid, and they seem to put their ThinkPads to sleep just fine.
> * I don't need to edit my XF86Config file (maybe not necessary any more)
Ok, so it has been ages since you used Linux on a ThinkPad. I need to keep this in mind. I have never had any difficulty with X11 on my ThinkPads.
> * no hassle of getting a compositing window manager working (hopefully compiz/emerald works better now than in 2008), figuring out how to disable gdm/kdm because the window manager options with the distribution are always terrible so I just start from .xinitrc because figuring out how to get gdm/kdm/xdm to run what I want seems to change every time I have to look.
So you purposefully rejected the distro's configuration system and are complaining about having to configure things manually? X session management is a breeze for GNOME on Debian (and, I am assuming, Ubuntu). What did you have trouble starting?
> * no more poor battery life
The ThinkPad battery is not the best, but it is pretty good. Given that it has been at least 6 or 7 years since you used Linux, I think you would be surprised.
> * filtering through low quality apps (KDE: crash-happy and UgLy; GNOME: pretty but light on features); still scarred over X-CD-Roast...
X-CD-Roast! Nice, you are comparing 2014 OSX to the 2001 Linux user experience. This is getting silly.
> * AirDrop!
How do you configure AirDrop to work with all of your other unix-like machines? I don't know how to do that with my Debian and OpenBSD boxes. Maybe it is a FreeBSD or NetBSD thing? Does it share with Windows as easily as Samba?
> * no arcane errors about sound because Ubuntu suddenly decided that I needed Jack (or maybe it was the other one), when ALSA was working just fine
I have never been forced to install Jack, but I do not do professional audio work. This is another problem you created for yourself. No operating system can save a determined user from shooting their foot off. But then again, you seem to have trouble with a lot of the unix workflow, so I guess I should not be surprised.
> * oh, yeah, whenever I need to run an XWindows program it integrates seamlessly into my other programs. So much so that I tend to forget to use Ctrl for the keyboard shortcuts instead of that Apple command key with the weird symbol on it. It even seems to cut and paste from the rest of the system reasonably well.
What Xwindows apps do you use?
> Yeah, I'm totally loving the XWindows experience on MacOS X!
I am glad I have found a happy X-Windows-on-OSX user. Can you help me out with some things:
* Apple's website says "X11 is no longer included with OS X."[1] Do I have to be an Apple Developer to get X Windows?
* I have had a lot of trouble getting awesome/xmonad/i3 to work in OSX. I swear it is almost as if iTunes is allergic to being a nice tile on my media workspace.
* For the life of me I can not get selected text to paste by middle clicking. How does that work?
* I use vcsh+mr+git to manage all of my config files. This way setting up a new machine is super easy. Can you tell me which directory to point vcsh for my iTunes/Pages/Safari configuration?
* The second step on a new machine is moving all my music over. I have some troubles with iTunes. Is there a special apple+option hotkey to get iTunes to play my flac/ogg files?
* Whenever I ssh into a OSX machine I always have trouble getting iTunes/Pages/Safari to honor my $DISPLAY variable. What am I missing?
Anecdotally confirming. When shopping around for the past 10 years, I've usually bought used PCs and used parts, both desktops and laptops, but after deciding to try out a Mac in early 2012, buying used just didn't seem to be worth the discount.
It doesn't seem to be so bad right now, but part of this may be due to my insistence on getting at least 4GB of RAM for a MacBook Air, which was hard to find on the used market.
Adding to the anecdotes[1], of the last 5 computers that I have owned, 0/2 of the Macs are still running, compared to 2/3 of the non-macs. Because of this, I refuse to ever purchase a used mac, the discount is not worth the risk, especially considering the fact that it's nearly impossible to repair them on your own.
[1]Random idea for a website/service: Computer reliability statistics by model/year. Could be really useful for people in the market for used machines. Not sure how/where you would get the data though. Seems like forums are overrun with anecdotes, but actual data is few and far between.
Macs are quite variable in terms of good models and bad models. This seems to apply to both laptops and desktops, unfortunately. I also haven't heard of a website or service with the reliability statistics you want.
You didn't mention AppleCare. It is transferable and runs for 3 years from the original purchase. So if you buy a 1-year-old machine from a hipster who wants to upgrade, you would be covered for the remaining time.
I didn't say the site was good, just the service ;)
Hadn't noticed ads before - I see one now. They might be trying to take advantage of traffic a bit; they primarily serve just one geographic market, but may get outside traffic. Dunno. Again, hadn't noticed before.
FWIW, I don't actually like the site - it's always slow on my mobile, and even on the desktop there's some weird scripting stuff going on. That said, the personal support and attention I've had on my few transactions over the last year make up for that.
But given he's a happy customer of their hands on service and didn't even mention the site, I'm not sure the answer you'll get will be terribly relevant.
I'm sure it wasn't a great idea but we used to microwave things for the effect. At one point there were no lightbulbs left in our house because they'd all gone in the microwave. Your very own Aurora Microwavas from the comfort of your own kitchen.
Eventually the gas in the bulb expands too much and the bulb itself shatters. In my experience, other than having to clean up the broken glass, the microwave still runs fine. At one point we had a bulb where the glass expanded out enough to ease the pressure created. We could use it again and again.
Maybe it's a bad thing to do, maybe there's some horrific gas that's created. Someone on here could weigh in on that. It's an incredible effect though and I'd happily do it again.
Obviously, general disclaimers on looking after your own health and safety apply.
Energy-saving light bulbs are so dangerous that everyone must leave the room for at least 15 minutes if one falls to the floor and breaks, a Government department warned yesterday.
The startling alert came as health experts also warned that toxic mercury inside the bulbs can aggravate a range of problems including migraines and dizziness.
And a leading dermatologist said tens of thousands of people with skin complaints will find it hard to tolerate being near the bulbs as they cause conditions such as eczema to flare up.
The Department for Environment warned shards of glass from broken bulbs should not be vacuumed up but instead swept away by someone wearing rubber gloves to protect them from the bulb's mercury content.
In addition, it said care should be taken not to inhale any dust and the broken pieces should be put in a sealed plastic bag for disposal at a council dump not a normal household bin.
None of this advice, however, is printed on the packaging the new-style bulbs are sold in. There are also worries over how the bulbs will be disposed of.
Good shout, thanks for the link. I'm not sure the effect would really work with energy saving bulbs so there's no point using them anyway. Thanks for pointing out the risk.
Would be good to find another source other than the daily mail since they're scare mongering racists. They probably give the same advice about immigrants. (If there's one in the room take your family and leave for 15 minutes)
That was the first link I found, btw. If you think there is something wrong with the validity of the article linked previously, there are other links, like the following from the Environmental Protection Agency, which raises the same issues: http://www2.epa.gov/cfl/cleaning-broken-cfl
Furthermore, the discussion was not specific to incandescents, but I specified energy-saving just in case.
Those are just precautionary measures for people who are really worried about minimizing their mercury exposure. It's not a big deal unless you're breaking CFLs on a regular basis or dealing with some of the older fluorescent tubes; the amount of mercury they contain is tiny.
This is also explained in the last section of the EPA link. Nevertheless, it remains relevant in a discussion about putting bulbs in microwaves to have some fun. I would stick with incandescent, which apparently are also more fun.
Microwaves interact with metal in interesting ways. It will likely induce high voltages in the computer's wiring, causing the chips to release all of their magic smoke.
I grew up in a time when microwave ovens were first released. As such the very first thing you learned is no metal in the microwave oven.
I'm wondering if people who are younger somehow think of this differently or aren't automatically taught the same thing. I mean it seems so obvious (to me) that I wouldn't even think to point it out to someone actually. It's like saying "don't let the car run over you" or "don't play catch with the laptop".
This is entirely not true.
When microwaves were first released, they were so low-powered that microwave cookbooks gave advice like "wrap the edges of the chicken in foil".
I don't think "most" microwaves have a metal rack, although they're not uncommon.
It's also not uncommon to see microwave-safe food containers that contain metal. I've seen grocery store deli soups, for example, that end up with a big ring of metal around the top when you open them, but can still be microwaved.
Still, "no metal" is a good approximation. "Unless it says you can use it" is probably OK to leave implied.
I've seen that as well. I'm wondering (from my own experiment) if it has something to do with angles vs. no angles [1] in terms of the metal in addition to the blocking by metal of the actual microwaves.
[1] Same as with stealth airplanes avoiding radar, right?
You're on the right track. The strength of the electric field around a charged, curved metal surface is inversely proportional to the radius of the curve. Sharp points have a very small radius, so they generate a large electric field, which can easily ionize air and thus cause arcing.
So spoons are okay, as they normally have no sharp points. Forks can cause problems, though.
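The inverse-radius relationship can be made concrete with a toy calculation. The numbers here (the induced potential and the radii of curvature) are purely illustrative assumptions, not measurements:

```python
# Field at the surface of a charged conductor at potential V scales as V/r,
# so sharper curvature means a proportionally stronger field.
def surface_field(voltage_v, radius_m):
    return voltage_v / radius_m

V = 1000.0                          # assumed induced potential, volts
spoon = surface_field(V, 0.010)     # ~10 mm radius: rounded spoon bowl
tine = surface_field(V, 0.0001)     # ~0.1 mm radius: fork-tine tip

print(tine / spoon)  # the tine sees a field ~100x stronger
```

With the same induced voltage, the fork tine's field is two orders of magnitude stronger than the spoon's, which is why the tine arcs and the spoon doesn't.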
Actually there's another issue here. If you stick an unopened can of food in the microwave, the microwaves can't reach the food; you get the same effect as if you ran the oven empty, which is that the field becomes very intense and risks burning out the magnetron or other components. The oven is designed to have its output absorbed by something, not just reflected back.
Microwaves induce a current in metal. If there's a gap for that current to jump, you'll see it jump, and it'll probably cause problems, including fire; but if there's no gap to jump, all that voltage potential just stays in the metal.
Put a tablespoon in the microwave and turn it on - nothing happens. I do this all the time when reheating soup.
I've won money on small bets like this too. Non-technical people have no idea WHY metal poses a problem in the microwave, so I just bet them $10 that I can put a spoon in there without problems, and they never believe me. They just think I have special spoons.
Preface for anyone reading this: I don't have any technical degree only what I have picked up over time. That said:
So in terms of the instruction "wrap the chicken": assuming the chicken were wrapped in a way that left no crinkles producing metal gaps, the foil would merely block the wrapped area from cooking, right? So in theory a nice idea, but in practice people slap on the tin foil and then you have sparking? Hence "no metal in the microwave" really comes down to not being able to rely on the general public knowing the nuances (which makes sense). (Human behavior is something that I do know quite a bit about.)
Maybe you've done this one: you take a glass of water in Pyrex (or a coffee cup) and heat it just until it is ready to boil. Then you put a spoon or other object in, and it explodes. Because apparently (I think..) breaking the surface tension is the issue (which can happen with anything - if you don't remove it carefully and shake it a bit, it will also happen).
Another thing that I've noticed is that water obviously boils at different times depending on the humidity in the room (I'm non technical enough to think that I figured that one out but feel free to correct me..)
I think that water in a microwave can get superheated (hotter than 100C) if the surface is smooth, or there are no little air bubbles left to start the boiling. Then, when you put something in, it provides a surface for the bubbles that are desperate to form, and they all form at once.
I had a spectacular explosion with my glass teapot one time. The microwave at work was in a different room, so I would heat up the water to boiling, and then not hear the bell, so some time later I'd remember, and do it again. Apparently reheating causes the bubbles to basically get used up so there wasn't anything to start the boiling. So the third time I was standing there waiting so I wouldn't forget, and suddenly I hear a BOOM and the pot was half empty with a lot of water outside. I think there was just one giant bubble that eventually formed and blew everything out. (The pot was unharmed)
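A rough energy balance shows why even modest superheat is violent. Assuming (hypothetically) the water sat 10 C above boiling, the stored sensible heat flashes a couple of percent of the water to steam, and steam occupies roughly 1600x the liquid's volume at atmospheric pressure:

```python
c_water = 4186.0   # J/(kg*K), specific heat of liquid water
latent = 2.26e6    # J/kg, latent heat of vaporization at 100 C
superheat = 10.0   # K above boiling point (assumed for illustration)

# Fraction of the water that flashes to steam once nucleation starts:
flash_fraction = c_water * superheat / latent
# Steam occupies ~1600x the volume of the liquid it came from:
volume_burst = flash_fraction * 1600

print(f"{flash_fraction:.1%} flashes, ~{volume_burst:.0f}x volume expansion")
```

Roughly 2% of the water flashing instantly still produces tens of times the original liquid volume in vapor, which is plenty to empty half a teapot.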
"Came out" refers to when they came into widespread use. That was in the '70s.
As for low power: I can find nothing indicating that it was ever OK to put tin foil in, though I do know that in some cases you might use metal specifically to block cooking. So perhaps with a low-power microwave there was no sparking, etc. (I can't find anything on that, and I don't see the link you sent showing it - which page of the cookbook is it on?)
A regular oven provides heat from a heating element (which may be gas, electric, wood fuel, etc.), which heats the air in the oven. A microwave heats by using microwave radiation to agitate responsive molecules in the food (water is one) - any warmth you feel when opening a microwave has come from the heated food, not the microwave itself.
When metal is in a microwave, the waves hitting the metal can create sparks, and sparks kill electronics.
This sounds like a BGA issue. They also had this on the PS3/4 and Xbox, if I'm not mistaken.
The chip has a grid of small solder balls on the bottom instead of pins sticking out. Due to thermal differences during operation, some rows can experience mechanical stress from uneven heating of the device. When cracks form, the contact disconnects from the board. Some images here:
When it's cooled again the contacts join, placing it in the oven applies an even thermal load across the entire board and you basically anneal the cracks.
This happens all too often unfortunately. It's why you didn't see anything BGA packaged in the defence industry for a number of years -- they are not mechanically stable. I did have a reference for this but I can't find it now.
Also the multi-layer boards tend to bend when you repetitively heat/cool them resulting in the actual metal traces cracking inside.
Sometimes there's enough contact after this oven cycle for it to reconnect BGA packages and board traces semi-reliably but like hell I'd rely on this method for long-term stability.
I did a spell post-university reworking things that pick and place machines had screwed up and it was pretty much entirely packages like BGAs where there were arrays of solder connections. The production guys were always returning prototype devices due to mechanical problems on the boards as well and they were coming back with socketed LGAs and soldered PGAs.
> This happens all too often unfortunately. It's why you didn't see anything BGA packaged in the defence industry for a number of years -- they are not mechanically stable. I did have a reference for this but I can't find it now.
There is also a problem with rework. BGAs are hard to get off the board without destroying the board in the process, especially on multi-layer boards. For cheap commercial boards where the automatic decision is to scrap the board when the chip fails, that's fine. For $10k+ circuit card assemblies on a low volume defense production line, scrapping the board is a last resort. This applies to production defects as well as field returns.
> Also the multi-layer boards tend to bend when you repetitively heat/cool them resulting in the actual metal traces cracking inside.
Another problem is delamination (separation of the board layers). Delamination allows contaminants to get in on the traces and possibly start shorting things out. It seriously degrades the reliability of the board. That was the biggest problem for us when trying to rework CCAs. We had no BGAs, but we did have a card that used a few parts with thermal pads on the bottom. It took heat from both sides of the board to get the chip off, and it was very easy to apply too much heat and delaminate the board in the process.
And from a manufacturing/engineering point of view (I am neither), I guess it's really hard to solve this problem.
You have some unpredictable 1/100 or 1/1000 defect that occurs long after production and sale.
Just how do you go about isolating the cause, and testing a solution? Make 5 changes, and put through a production batch of 1000 units, and then do accelerated testing? If 5 fail from one batch, and 2 from the rest, is there even enough statistical power to confirm that you've come across a solution? And you just burned through 5000 units.
Sounds like fun trying to solve this kind of problem.
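The statistical-power worry above can be checked directly with a one-sided Fisher exact test on the hypothesized numbers (5 failures in a 1000-unit trial batch vs. 2 in a 1000-unit control batch). A stdlib-only sketch, not a full stats package:

```python
from math import comb

def fisher_one_sided(fail_a, n_a, fail_b, n_b):
    """P(>= fail_a failures in group A given fixed margins):
    the one-sided Fisher exact / hypergeometric tail probability."""
    total = n_a + n_b
    fails = fail_a + fail_b
    denom = comb(total, n_a)
    return sum(
        comb(fails, k) * comb(total - fails, n_a - k)
        for k in range(fail_a, min(fails, n_a) + 1)
    ) / denom

# 5 failures out of 1000 changed units vs 2 out of 1000 baseline units:
p = fisher_one_sided(5, 1000, 2, 1000)
print(f"p = {p:.2f}")  # ~0.23: nowhere near significance
```

A p-value around 0.23 confirms the intuition: 5-vs-2 failures out of 1000 each is entirely consistent with chance, so that 5000-unit burn really wouldn't tell you whether the fix worked.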
There are PCB design/layout rules that deal with BGA. I'm not saying it's a 100% guarantee, but (much like EMC/EMI design rules) there are a lot of solid pointers that remove 90% of the issues. The remaining 10% are (again, much like EMC/EMI) subject to the layouter's level of experience.
Currently on mobile, can't link a PDF right now but if you Google " BGA PCB layout guidelines" you'll get a ton of documents.
Lastly: PCBs go through several optimization cycles, some of which occur after release for high-volume stuff. There are always revision numbers on the silkscreen; sometimes they catch an issue like this after x1000 devices are in the wild and do an update.
In production you would profile the boards. You take a board and run it through the oven with some thermocouples. You'd then set the temperatures of the pre-heat, heat, and cool down sections. This would heat the solder to melting point without putting too much stress on the components.
This is from memory from a long time ago using a teeny tiny little pick and place machine that did a few thousand components per hour.
BGAs were always always horrible to do.
"Design for production" is really very important and it's hard to find much information about it. Some simple little things can make the difference between an operator having to plonk a component on the board by hand every time just before it goes into the oven or having the machine do it. (Again, from memory).
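The profiling step described above can be sketched as a simple sanity check over thermocouple samples. The sample data and the limits (ramp rate, liquidus temperature) are made-up illustrations, not values from a real solder datasheet:

```python
# (time_s, temp_C) thermocouple readings from one profiling run (assumed data)
samples = [(0, 25), (60, 150), (120, 183), (150, 220), (180, 183), (240, 80)]

def max_ramp_rate(samples):
    """Largest absolute temperature slope between adjacent samples, in C/s."""
    return max(
        abs(t2 - t1) / (s2 - s1)
        for (s1, t1), (s2, t2) in zip(samples, samples[1:])
    )

peak = max(t for _, t in samples)
# Toy limits: keep ramps under ~3 C/s to avoid stressing components,
# and make sure the peak clears a notional 217 C liquidus.
print(max_ramp_rate(samples) < 3.0 and peak > 217)  # True for this profile
```

In a real line you would tune the pre-heat, soak, reflow, and cool-down zone temperatures until the measured profile satisfies the solder paste's datasheet limits.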
Not sure if it is real reflow; it doesn't seem warm enough. Maybe some contact point was suboptimal, and by heating everything, the resulting expansion caused some repositioning that made the contact work properly again. I'm assuming this because we had a similar but inverse situation once: there was a small soldering problem that would only manifest after the PCBs got heated above 50C.
But in the end it all comes down to luck; it's not like 'heat it' is the new tool that will fix everything. (My MBP had a swollen battery - I wouldn't dare put that thing in an oven.)
What I gathered from colleagues who had similar issues (and who also successfully fixed them using an oven): the solder or something else tends to crystallise and grow thin threads (tin whiskers), eventually causing a subtle short circuit that b0rks your laptop. 170 degrees may not be enough for reflow soldering, but it should be enough to melt those threads/crystals down and break the short.
The temperature is just about OK for reflow, but the board temperature wasn't ramped along the normal reflow curve. Stress can cause problems as well, so he was certainly lucky with regard to the things that could have gone wrong. That indicates to me that the assembly quality is pretty good to begin with.
If you're talking about a $10,000 reflow oven with tight PID control where you set the profile and go, you'd be correct. If you're talking about a kitchen oven with cheap setpoint control that has a tendency to waaay overshoot, you're best off aiming a few tens of degrees low.
Depends on the alloy; tin/bismuth will melt around 140C. I would've expected over 180C to be necessary as well, though. Sn63Pb37 is very common and melts at 183C, but since it has lead I don't think it's used much in large-scale electronics manufacture.
The problem is that NVidia switched the solder connecting their chips to the board without switching the potting material to one matching the new solder's thermal expansion coefficient. So as the chip changes temperature, the thermal mismatch turns into mechanical stress, until one of the contacts breaks. The fix in the article will re-form the attachment, but you'll still be accumulating stress, and it will eventually fail again.
Google for "bumpgate" for more information about this than you'd ever want.
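To put rough numbers on that stress mechanism: the shear an outer solder ball sees scales with the CTE mismatch, the temperature swing, and the ball's distance from the package centre. All figures below are ballpark assumptions for illustration, not NVidia's actual materials data:

```python
# Approximate coefficients of thermal expansion (CTE), per kelvin:
alpha_chip = 3e-6    # silicon die (approx)
alpha_board = 17e-6  # FR-4 board (approx)
delta_t = 60.0       # K temperature swing per power cycle (assumed)
dist = 0.020         # m from package centre to an outer ball (assumed)

# Differential expansion between board and die at that distance:
shear = (alpha_board - alpha_chip) * delta_t * dist
print(f"{shear * 1e6:.1f} um")  # ~16.8 um of shear per thermal cycle
```

Tens of microns of shear, repeated every heat-up/cool-down cycle, is how solder joints fatigue and crack; an oven reflow rejoins the crack but does nothing about the mismatch driving it.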
Similar story:
I was at a friend's house, and we wanted to play Nintendo Wii. Tried to turn it on, but nothing happened. My buddy grabbed the power brick, tossed it in the microwave, and blasted it for 3 seconds. After plugging it back in, it started like nothing was wrong. I was flabbergasted (one: because WTF! who puts a power supply in a microwave? and two: OMG! it worked!)
Stupid Wii power supplies - these are known to be pieces of junk. I keep a spare for this reason. Simply letting one "rest" unconnected for some unspecified duration also does the trick.
I heard that the Wii power supplies are very protective against surges, at the cost of being extremely sensitive to them. I've done the "unplug it completely for a few minutes" trick to restore functionality a few times.
You might want to get rid of that Vertex 2 as soon as reasonably possible. It has an issue where at some point when switched on or resumed from sleep it will corrupt all your data and refuse to boot. Some have a different bug where it will silently corrupt some of your data and the only way to find out is by the HDD light being on for longer than usual. Back up your data and get rid of it. I had it happen to me twice (warranty replacement after the first one, but then second one did it too). OCZ died as a result of this. Save yourself before it's too late.
I had one Vertex 2 just up and die, but another one is still going strong in a Macbook - though that one's just for surfing, so it's not critical if I experience a catastrophic data loss.
Will have to agree with you, the 840 Pro's are great.
Any idea if the Vertex 3 is in danger of this as well? I've been running it since April, 2011 without a problem - and have good backups - but I'd hate to waste the time trying to recover.
Reminds me of my first Raspberry Pi. It was from the first batch, and only worked if I gave it a blast with a hairdryer. Once it warmed up it booted fine, but if I turned off the hairdryer it would turn off after about 30 seconds.
At the time it was about a month wait for a new one to arrive, so I did a lot of initial Pi discovery with a hairdryer.
Tried it on mine, no luck. Basically it didn't run properly because the resistance of the fuse was ridiculously high, resulting in a voltage drop of almost 1V - meaning I had to power it with a 6.5V power supply. Well, you don't find those easily, so I just got rid of the fuse.
This is a known method for fixing this type of problem, where the solder has been damaged by excess heat and/or improper cooling from the start. It's also known to most likely be a temporary fix that will fail again in a couple of months (depending on usage etc.).
This is also why I would take great caution purchasing a used laptop (especially with a discrete graphics card). It's a real risk that the previous user did this trick to quickly sell it while it's "working".
As an owner of a 2011 Macbook Pro with AMD graphics (known to fail) I sure hope that Apple acknowledges this issue soon.
This is why I obsessively check the oven before I turn it on. Always.
We had an incident when I was probably 5 years old: my sister put a stuffed toy in the oven, and a while later my mom turned it on to use it. Then we smelled smoke...
There was no lasting damage (beyond my sister losing her favorite toy), and she of course was much too young to know better. That incident, however, got me paranoid about turning on the oven with unknown objects in it. So now I check first.
I printed the article, the GPU kernel panic log and took them to the Genius bar. Apple replaced the mainboard for free, more than 3 years after the purchase.
It took me more than a year of dealing with random crashes and several visits to the genius bar before I found this article.
Though the downside is that you'd miss the excitement described here...
I searched everywhere about this issue but never found this article; if only I had seen it before the 3-year mark. But I have already solved the issue by installing Linux and not installing the Nvidia driver. I will buy a new laptop soon anyway, but I will still run Linux, though. I will try OP's method when I get a new laptop.
At some point I spilled water all over my MacBook Pro, so I turned it off and let it dry overnight. The next morning it refused to turn on. I took it to Apple and they said they'd have to send it off for repairs for 750 dollars!!
I refused to pay money, went home, asked some friends about it, and concluded that I should put my computer in an inverted position like a teepee.
Same exact thing happened to me! Brand new hot starbucks green tea spilled all over my 4 month old MacBook Air mid-2013. I was devastated. Stupidly I tried to turn it on right after the spill and it turned on for a moment then shut off and wouldn't turn on again. I put the keyboard face down on paper towels and indirectly in front of a fan for a few days and it came back to life! I now have a keyboard spill guard of course...
If you have a gas oven with a pilot light, it's the perfect place to let things dry out (don't turn it on, just let the small flame of the pilot light dry things out). These seem to be rarer nowadays, though.
I recall some years ago (7?), some other mac having some characteristic issue that could almost always be fixed with a little time in the oven.
Having a kitchen with an oven and living in a student area where Apple products were popular, I sensed a business opportunity.
I put up an ad on some local classifieds with a lowball price for these units (but not much different from what the broken ones would sell for on Ebay, minus the hassle). I quickly learned that, after investing in a premium product, people would rather hold onto their brick than turn it into at least some cash. I never even got a chance to try out the procedure; people would counter-offer with ridiculous prices for what is, for them, a brick.
The sunk cost fallacy at work.
edit: I think I even offered pickup and some data recovery/security as a part of the offer, no takers.
Remember that there might be a law in your country that regulates complaints.
In Sweden we have "reklamationstid"[1] which gives you sort of a legislated warranty (applies to almost all goods) for three years.
All goods that I have complained about using "reklamationstiden" have been replaced.
Yes, many people don't know about this law. The shops know this and take advantage of it.
Of course there is also the manufacturer's warranty, which lasts for as long as the manufacturer decides.
Why have that law, and then make it legal for the shops to not acknowledge it up front? Why does the law have to be some weird kind of game where only people in the know benefit from it?
I think this process is really just baking out moisture, as the PCBA is not in the oven long enough to get to temperature, and the temperature is not hot enough to reflow the solder.
Most of the components on the board are not hermetically packaged, and there are probably moisture-sensitive parts on the PCBA. So baking at 170C for 7 minutes is essentially a drying process.
Also, Apple probably has to be RoHS compliant, which means they have to use lead-free Sn-based solder. The melting point of lead-free solder is typically 30-35C higher than that of normal leaded solders.
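A quick back-of-the-envelope check supports the drying theory. The alloy temperatures below are typical published figures for two common solders (eutectic Sn63Pb37 and lead-free SAC305); they're generic values, not anything Apple-specific:

```python
# Rough sanity check: does a given oven temperature reach the
# melting (liquidus) point of common solder alloys?
# Values are typical published figures for each alloy, not Apple specifics.
SOLDER_LIQUIDUS_C = {
    "Sn63Pb37 (leaded, eutectic)": 183,
    "SAC305 (lead-free)": 217,
}

def can_reflow(oven_temp_c):
    """Return the alloys a given oven temperature could melt."""
    return [name for name, temp in SOLDER_LIQUIDUS_C.items()
            if oven_temp_c >= temp]

# The 170C / 7-minute bake from the article melts neither alloy,
# which is consistent with "it's just drying out moisture".
print(can_reflow(170))  # []
print(can_reflow(200))  # only the leaded alloy
print(can_reflow(230))  # both alloys
```

So a domestic-oven bake at 170C sits below the liquidus of either alloy; you'd need well over 217C to actually reflow lead-free joints.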
I've done this twice on different laptops. It works like a charm :)
The explanation I got when I did it the first time was this: when the computer heats up and cools down a lot, over time the solder joints on the motherboard can "crack", or something, effectively giving loose connections. By heating the joints in this controlled manner, the metal melts and solidifies properly when cooled down.
Actually, a lot of electronics from this era failed because manufacturers were still experimenting with RoHS (read: no lead) electronics production, especially lead-free solder.
While heating a RoHS PCB in a food oven might still be kinda toxic, at least it's lead-free ;).
This is due to failure of the ball grid array solder, I'm assuming? Most of what I've seen has been due to failure of the video card - I think the nvidia ones were notorious on certain models, but I've had it happen on my 2009 iMac with a radeon card.
Sort of what sold me on integrated GPUs as the ideal in these form factors: fewer parts to blow up in your face...
>> This is due to failure of the ball grid array solder, I'm assuming?
On the longest 2011 MBP GPU issue discussion on Apple Support, there was some speculation that the switch to weaker "green" solder by Apple caused the issue. Some people had their GPUs reflowed at an independent service center, and it fixed the problem, but for others, it was only a short term fix, with the GPU glitch returning after a few weeks/months.
I can confirm that my 2008 iMac had this same problem with the nVidia card. And yes, cooking it in the oven did bring it back long enough to grab the few items that I wanted. At the time, I read that it had to do with Apple switching to lead free solder. Sad to know I may face this with the 2011 MBP that I am typing this on (and I love so dearly)
I used to work at Cray, the supercomputer company. Back in the 1970s, 30 yrs before I got there, they used to solder the circuits on each board by baking the entire board in an actual kitchen oven. With smaller and more delicate circuitry on mainboards these days, this is less practical and more dangerous, but that doesn't mean it isn't still possible with some degree of luck. I'd still recommend removing the BIOS battery if you're crazy enough to attempt this.
It makes me sad to read how some mistreat their laptops. Why keep it on 24/7? Even if you're using your computer 12 hours a day you're letting it burn for nothing for the other 12! And then you buy a new one every year.
Electronics likes being in a steady thermal state, and mechanical devices like a hard drive have less wear when continuously spinning. It's the spinning up and down that causes a lot of the wear.
This has been an issue as far back as a decade ago with G3 iBooks. I had my G3 800's logic board replaced twice under warranty, but it failed again after that. The 'hot coin on the gpu' trick solved the problem, albeit temporarily.
I gotta say, I would've never thought of baking the motherboard. Makes sense in regards to overclocking and as the op stated, he constantly ran his MBP. Baking, I guess, would kinda resolder connections...total guess.
My recommendation (I repair devices all the time now): after you remove all the little screws and the piece you need (like a motherboard or PCB), take the time to put every screw back in the hole it came out of. A trick I learned as a mechanic. It helps prevent losing them and guessing, since screw threads and heights sometimes vary.
Yeah, if I'm running a fast fix (like pulling a battery, etc.) I'll usually arrange the screws in an orientation matching how I pulled them out, but on long fixes like the OP had (or when you're waiting for parts)... I put them back in.
Not a bad idea with the masking tape, guess a magnet would work too.
I had a somewhat similar issue with an Acer Aspire One ZG5 netbook, except my problem was that the device would power off about 1.5 minutes after powering on.
Apparently a thermal cutoff switch needed some recalibration/replacement, so my fix was 2 hours in the freezer followed by a few more hours waiting for condensation to evaporate before attempting to power the netbook on again...
I used the above method for nearly a year before acquiring my present laptop, relegating the netbook to my growing pile of dysfunctional hardware.
If you use a temperature hot enough for reflow (200C or more, according to andyjohnson0), is there any risk of heavy components on the bottom side falling off?
Except for extreme cases, no. The cohesive force of molten solder is more than enough to hold your components in place. Interestingly enough, this is also what causes surface-mount components to automagically "settle" over their pads when you have the proper amount of solder/flux.
This was a "well-known" (quoted because it's well known as far as enthusiast communities are concerned) fix attempt for discrete GPU cards on desktops. I've seen a lot of success stories where old GPUs (think 8800 GTX) would crap out; the owner would take off the shroud, heatsink, and any other removable parts, put the card in the oven for a few minutes, and boom, reflowed solder fixed the issue.
Dealt with similar issue with HP tx2000 (HP is a horrible company that refuses to acknowledge a problem that has plagued their entire series).
I removed the processor, applied fresh thermal paste with a copper shim & gave it the oven + hair dryer treatment. It worked for a while but failed again. Not worth wasting time on it. It's better to ebay it for parts.
I used to have an HP with this problem as well. Eventually I managed to get my files off the computer, and then promptly bought a new, not-an-HP laptop. Never again.
Did it too, successfully, 3 years ago with my Acer laptop. It worked but, as you're saying, didn't last very long. Anyway, it was pretty cool ;) Pic here: http://imgur.com/iDaYpBD
This is astonishing... but does anyone else feel like this sounds like trolling? There is no actual video, only pictures. And perhaps it's just me, but I don't see how high heat can resurrect the logic board?
I had RAM slots die in my MBP due to warping. The solution was to loosen the screws around the RAM slots, letting the bent board unbend. I imagine cooking would soften things up and allow loose connections to better connect.
I used a Sterilair (a small heater that supposedly cleans the air) to resurrect a Sony camera. It had also stopped working for no obvious reason; I suspected humidity and left it hanging about 30cm above the heater for a day.
I resurrected my old 2009 Macbook Pro that was doing something similar by turning it on and putting it under a blanket for an hour. It generated enough heat to reflow whatever BGA had busted and has worked perfectly since.
Be careful not to use vigorous airspeed. It's possible to blow balls of solder away from the pads. These then cause shorts and are annoying to fault find and repair.
Yes, on a BGA the pads are under the IC package. That BGA will be on a PCB close to other ICs which has other packages. Using a hot airgun risks distributing small balls of solder.
If you can accidentally create solder bridges with a hot-air rework, then there was too much solder, even if you're talking about fine-pitch TQFPs. I've accidentally made bridges using QFNs, but only on crappy homemade PCBs and, again, with too much solder. It is usually easy to correct. You can dislodge nearby capacitors, diodes, resistors, etc., but that's about it.
On a good PCB that's been cleaned properly there won't be any errant solder-balls, and surface tension will not allow them to form even when using a hot-air gun or rework tool. I've never seen a hot-air tool that blows with enough force to overcome the surface tension of the solder.
I have managed to make a mess with solder when removing ICs using compressed air (not for the faint of heart), but that's a different story and it is still easy to clean up.
You're talking about people who know what they're doing, using correct hot air rework tools.
I'm talking about people who don't know what they're doing using paint-stripper style hot air guns.
I have seen faulty PCBs caused by people using that style of hot air gun to rework devices. I used to have photographs but don't have them any longer, but there should be photos in some of the "soldering problems" engineering books.
I confirm here that one day, about ten years ago, I managed to save a harddisk on which I had important data for which I had no backup (well, I had backups but I had lost/forgot the gpg key to decrypt them which in a way is even more stupid than not having backup at all: false sense of security).
You could hear the HDD "wanting" to start but failing. I searched the net like crazy because I really needed the data kinda badly: I even considered trying to find an identical, used (but working), HDD and swapping the controller.
Eventually I found a message (somewhere on Usenet I think) saying that some failing drive may start when cold enough... So I did put the HDD in the fridge. After the 2nd try I managed to boot it and to copy all my data and it's the last time that that drive booted!
So, as crazy as it sounds, the fridge/freezer trick did work in some cases... and I take it that the grill/heat thingy may work in some cases too :)
I unbroke an iPod 5G by slamming it into my desk fairly hard. That was about 3 years ago and it's still working. Something about the HDD bearings seizing because of shock.
I have had a Seagate HDD in the freezer for 2 years now. Every method of recovery I've tried has been unsuccessful. I'm waiting to send it to a recovery service, which costs $1500+ that I haven't got lying around.
If the drive has files that you haven't "needed" in over 2 years, do you really want to spend $1500 or more to access them? Must be personal photos or videos?
I'd hate to lose my personal photos. I'm glad that Apple came out with Time Machine, because I now have backup religion; not only time machine backups but offsite CCC backups.
While doing desktop support, one of my Mac users' hard drives was failing and wouldn't spin up at boot, so you had to help it spin up. One trick was to give the HDD housing a rotational jerk the moment you turned on power. Another was to use a pencil eraser to spin the exposed spindle, also during power-on.
I've talked to a few recovery experts. They told me taking the cover off destroys the drive. Apparently the head has some tracks embedded in the cover, and if it's not removed in a certain way you can rip the entire thing out.
Learn how computers work, some mechanical physics knowledge would be helpful. You need a bit of education in order to rely safely on Googled advice, and if in doubt: don't do it!
Did not read the story, but if it's what I think, HP had the same problem: a bad nvidia chip, poorly designed cooling system, etc. They denied the problem until sued. Meanwhile, desperate users were putting their motherboards in the oven, taking hair dryers to the nvidia chip, and sticking pennies on the hot chip. I will never buy another HP laptop! I'm surprised Apple won't admit to a problem. I think if Steve were still alive he would admit to the defect and offer a fix. Why? Because Steve was one of us: he liked his toys and expected them to work. A video card problem would have irked the hell out of him, and his engineers would have had to pay for their mistake. What's his name, Tim Cook, just worries about the books. Which will eventually be Apple's fall from grace.
Right. It's not like the story was War and Peace. Not even the Cliff's Notes of War and Peace. It took about a minute for me to read, including time looking at the pictures.
Source: apple forums and http://mbp2011.com