
Indication and Important Safety Information

What is the most important information I should know about Wegovy™?

Wegovy™ may cause serious side effects, including: Possible thyroid tumors, including cancer. Tell your healthcare provider if you get a lump or swelling in your neck, hoarseness, trouble swallowing, or shortness of breath. These may be symptoms of thyroid cancer. In studies with rodents, Wegovy™ and medicines that work like Wegovy™ caused thyroid tumors, including thyroid cancer. It is not known if Wegovy™ will cause thyroid tumors or a type of thyroid cancer called medullary thyroid carcinoma (MTC) in people.

https://www.wegovy.com/FAQs/frequently-asked-questions.html

edit: For a safer way to lose weight, with additional benefits such as a reduced cancer risk, look into intermittent fasting. Good luck with your weight loss; it is a great step toward becoming healthier!


As a morbidly obese type II diabetic with unexplained hyperthyroidism, this will either solve all my problems or kill me.


> A recent review of the evidence suggests that this type of diet may help people with type 2 diabetes safely reduce or even remove their need for medication.

> However, people should seek the advice of a diabetes professional before embarking on such a diet.

https://www.medicalnewstoday.com/articles/can-intermittent-f...


your problems are solved either way, no?


I'd like to note that every drug has its set of side effects, and that the use of GLP-1 agonists in clinical vignettes such as that of an obese patient with DM is seriously considered.

More specifically, pharmacological therapy for obesity is indicated in patients with a BMI > 30 kg/m^2, or >= 27 kg/m^2 with another aggravating condition such as DM2 or hypertension [1].

[1] Pharmacological Management of Obesity: An Endocrine Society Clinical Practice Guideline, The Journal of Clinical Endocrinology & Metabolism, Volume 100, Issue 2, 1 February 2015, Pages 342–362, https://doi.org/10.1210/jc.2014-3415


Every drug has its set of mild or severe side effects.

But few drugs, including Wegovy, carry a Black Box Warning. Even fewer carry one where the increased risk in humans is unknown at the time of FDA approval and is backed only by rodent studies.

https://www.ncbi.nlm.nih.gov/books/NBK538521/


Tried IF. Couldn’t stick with it.


It is hard in the beginning, but it gets much easier after sticking with it for a few weeks. Your body and hunger adapt.


Not my experience.

I’ve tried them all.

Calorie counting. Keto. IF. Full on water fasting. Paleo. Meal prep / logging. Weight watchers. Everything works for a time. None are sustainable for me.


Just because a data aggregation site does not show your data on the front-end does not mean they deleted it from the back-end. So now you can charge $50 for people to search the "special" data pile: the data people took the effort to have removed from the front-end.

These data brokers crawl publicly available information. Telling them to remove your data only slows down the doxxer; it does not stop them at all, since the data has already been shared. It is not plugging the leak, it is mopping up some of the water. It gives a false sense of security and sends a clear signal to the doxxer that you care about your anonymity (so more "lulz" to be had).

A proper doxxing is also much more than entering a name in some search engines. Hackers especially do not like to be doxxed. For internet civilians who already put this data out there (on social media), a simple data broker doxxing is a mere reminder that such data is public to everyone, not just friends.

Doxxing defense is guarding your anonymity online. Everywhere. Doxxing defense is knowing when to change personas, and when to log off. That is: if you care about it at all. If you care about keeping your identity a secret online, see: https://www.youtube.com/watch?v=9XaYdCdwiWU (The Grugq - OPSEC: Because Jail is for wuftpd).


> Just because a data aggregation site does not show your data on the front-end does not mean they deleted it from the back-end. So now you can charge $50 for people to search the "special" data pile

Do you have evidence of aggregators doing this? It's plausible, but I haven't heard about it happening.

> Telling them to remove your data only slows down the doxxer; it does not stop them at all

I think you have a good point overall but let's not dismiss the solution in the ComputerWorld article, which is valuable. All security solutions do the same thing: They increase the attacker's costs, which stops attackers unwilling to pay the price. There is no perfect security.

For example, we tell users to use strong passwords on their Windows logons, but that only raises the cost of an attack and does not completely secure the machine.



Yeah, the simplest defense is to completely separate any online persona from your "real" offline persona. Obviously, how separate you make them has to be proportionate to how many enemies you expect.

But if you are an actual public person who is public with their real name, I do not envy you.


Yeah, this isn't even mopping up water. This is exerting effort to lie only to yourself. The emperor has no clothes and the emperor is you.


The guidelines say to write as if you were face to face with a person. You probably wouldn't mention those nasty things, or at least you would try to hide them behind constructive criticism.

Take this one step further: For everything you write, imagine the utmost authority on that topic reading your post. A blunt example: A rant about Python syntax. Imagine Guido van Rossum reading that during his coffee break.

Do not post if you cannot add anything to the discussion. Assume the person you are debating is smarter than you and knows more about the subject. This is still Hacker News. Ask someone if they have won the Putnam prize, and you may be unpleasantly surprised. [1]

Read and practice: http://www.paulgraham.com/disagree.html

Use your spell-checker. Write shorter sentences to avoid grammatical errors. Nobody is too smart for short, simple sentences.

[1] https://news.ycombinator.com/item?id=35079


> ... how exactly none of those are the ones with the necessary PhDs in statistics and algorithms to get anything of any value done.

I see it almost the other way around: companies strictly demand PhDs for Big Data jobs and can't find this unicorn. Yet we live in a time where we don't need a PhD program to receive education from the likes of Ng, LeCun and Langford. We live in a time where curiosity and dedication can net you valuable results. Where CUDA hackers can beat university teams. The entire field of big data visualization requires innate aptitude and creativity, not so much an expensive PhD program. I suspect Paul Graham, when solving his spam problem with ML, benefited more from his philosophy education than his computer science education.

Of course, having a PhD still shows dedication and talent. But it is no guarantee of practical ML skills; it can even hamper research and results when too much power is given to theory and reputation is at stake.

In my experience Machine Learning was locked up in academia, and even there it was subdivided. The idea that "you need to be an ML expert before you can run an algo" is detrimental to the field and does not help wider industry adoption of ML. Those ML experts set the academic benchmarks that amateurs were able to beat by trying out Random Forests and Gradient Boosting.

I predict that ML will become part of the IT stack, as much as databases have. Nowadays, you do not need to be a certified DBA to set up a database. It is helpful and in some cases heavily advisable, but databases now see much wider adoption by laypeople. This is starting to happen in ML. I think more hobbyists are right now toying with convolutional neural networks than there are serious researchers in this area. These hobbyists can surely find and contribute valuable practical insights.

Tuning parameters is basically a grid search. You can brute-force this. In goes some ranges of parameters, out come the best params found. Fairly easy to explain to a programmer.
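
A minimal sketch of that brute-force loop, assuming scikit-learn; the parameter ranges and the toy dataset below are purely illustrative:

    # Try every combination of a few parameter ranges and keep whichever
    # scores best on a validation split.
    from itertools import product

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

    param_ranges = {"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]}

    best_score, best_params = -1.0, None
    for n_estimators, max_depth in product(*param_ranges.values()):
        model = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth, random_state=0)
        model.fit(X_train, y_train)
        score = model.score(X_val, y_val)  # accuracy on the held-out validation split
        if score > best_score:
            best_score, best_params = score, {"n_estimators": n_estimators, "max_depth": max_depth}

    print(best_params, best_score)  # in go ranges, out come the best params found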

Adapting existing algorithms is ML researcher territory. That is a few miles above the business people extracting valuable/actionable insight from (big or small or tedious) data. Also there is a wide range of big data engineers making it physically possible for the "necessary" PhDs to extract value from Big Data.


While there's some truth in what you're saying, you sort of demonstrate a very common pitfall:

> Tuning parameters is basically a grid search. You can brute-force this. In goes some ranges of parameters, out come the best params found.

This sounds so simple. However, if you just do a brute-force grid search and call it a day, you're most likely going to overfit your model to the data. This is what I've seen happen when amateurs (for lack of a better word) build ML systems:

(1) You'll get tremendously good accuracies on your training dataset with grid search.

(2) Business decisions will be made based on the high accuracy numbers you're seeing (90%? wow! we've got a helluva product here!).

(3) The model will be deployed to production.

(4) Accuracies will be much lower, perhaps 5-10% lower if you're lucky, perhaps a lot more.

(5) Scramble to explain the low accuracies: various heuristics put in place, ad-hoc data transforms, retraining models on new data -- all essentially groping in the dark, because now there's a fire and you can't afford the time to learn about model regularization and cross-validation techniques.

And eventually you'll have a patchwork of spaghetti that is perhaps ML, perhaps just heuristics mashed together. So while there's value in being practical, by the time ML becomes commoditized enough to be part of an IT stack, it is likely no longer considered ML.
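
A rough sketch of the safeguard being described here, assuming scikit-learn: cross-validate inside the grid search and keep a held-out test set that the search never touches, so the reported number is closer to what production will see. The dataset and grid are again made up:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]},
        cv=5,  # 5-fold cross-validation during the search
    )
    search.fit(X_train, y_train)

    print("cross-validated score:", search.best_score_)
    print("held-out test score:", search.score(X_test, y_test))  # the number to show the business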


This will be the first at this scale. On both those sites the most channels I could find was 64. This program is planning 10,000 channels for unprecedented resolution and scale.

Since 16 channels is enough to predict whether a subject is looking at a face or not, it is exciting to see what this large-scale system is going to be capable of. Besides neuroscience, it could help with healthcare (research into dementia, epilepsy and schizophrenia).


The NeuroNexus Matrix array can be configured with up to 256 contacts. It's basically a Utah-style array with many shanks arranged in a rectangular grid. The difference from the standard Blackrock Utah array is that each of those shanks has multiple contacts along it. This is their newest product and you can buy it now, although I have no idea how well it actually works.

This technology is in principle extensible to much larger channel counts, if you increase the size of the array, the density of the shanks, and/or the density of the recording sites along each shank. I know that the Boyden lab at MIT has been working on this with the goal of simultaneously recording from >1000 sites. I'm not sure if they've met that goal yet, but they've been presenting on their progress at the annual Society for Neuroscience meeting since at least 2012.

One problem with standard microelectrode array technology is that it only works for targeting a small proportion of brain areas. You can only record from regions on the surface of the brain. Structures in sulci (folds) are difficult to reach because you'd have to pull apart the sulcus to insert the array, and many sulci contain blood vessels that will be ruptured if you do that. Targeting deeper structures like the thalamus (probably the most interesting target for schizophrenia) is intractable with this technology at least in primate brains because the electrode shanks have to penetrate a couple centimeters into the brain. Even if you could engineer an array with long enough shanks and manage to insert it without destroying it, you might do so much damage to brain tissue between the surface and the target that the resulting data would not reflect normal brain function.


The article wasn't clear on this: what is a "channel" in this sense? What is actually measured, from a physical perspective?


A "channel" usually means an individual contact on the device from which you can record a signal. A single channel may record "spikes" (action potentials) from dozens of neurons, but of those typically only 0-3 are distinguishable, depending on how close the electrode is to the neurons and how close the spike shapes are to each other. With ordinary metal electrodes, one can usually isolate action potentials from an average of 1-1.5 neurons per electrode.

However, not all channels are created equal. High channel count arrays typically have high contact densities. If the contacts are very close to each other (<100 microns or so), adjacent channels may record signals from the same neurons. I'm not entirely sure how this will affect the actual number of neurons one can isolate, but I'm eager to see. The contacts will record signals from fewer neurons than if the density were lower, but the increased density may make it possible to isolate signals that would otherwise be lost to noise, and will certainly make the process of determining which action potentials came from which neurons simpler.


I don't see a security vulnerability here, just bad security practice.

Either they delete the account. All is well.

Or they take over the account. It is common sense then to change the phone number associated with the account. All is well.

You could solve this "bug" by reading the documentation and creating a better security protocol (your current one is putting your organization's data at risk).

I clicked the title with just one thought in the back of my mind: "If this is an active, serious vulnerability, then why did OP not apply to the vulnerability program and have it fixed beforehand?"

My experience with the vulnerability team has been great (one honorable mention and one pay-out). If you did not get an honorable mention then it means the security team did not file a bug report. Your feedback could probably still be used to improve the UI.

As an aside: Hunting real security bugs on Google domains is insanely addictive (because they are so hard to find). Try to generate all their different error screens. Try to find the Google property running on aspx. To practice there is also https://google-gruyere.appspot.com/


Did Google notify you of a link-based penalty? You should set up Google Webmaster Tools, if you haven't already. Then you can read notifications about suspicious links pointing to your site and disavow the ones that are spammy.

I am not so sure that you are under a link-based penalty. I know you did not ask for this, but I had a look at your website's link profile, source and index health.

1. SEO: The online web form market is hugely saturated. You probably cannot compete with Wufoo on terms like "web form builder". Honestly ask yourself if you currently deserve a top 10 spot for this term. Are you a top 10 player in this field? Explore more specific and long-tail keywords. Create better targeted pages and page titles. "Documentation - NicSoft Software" is a missed chance. Create more content on the blog (inbound marketing).

2. Links. You do not have enough natural links to beat competitors. They get linked from web developer forums by real users of the software. You should also check out Google's stance on "Powered by" links. If these make up the majority of your backlink profile, you get links from a lot of bad neighborhoods. These links may thus do more harm than good. It is not an editorial link, but probably one given in exchange for a free version of the product. It is much safer to nofollow links created for profit or SEO/online marketing purposes.

3. Site HTML is not well-structured for information retrieval. For every page on your site, the first heading is "Home". Browse your site with styles disabled. Reorder repeating boilerplate code below the relevant page content. Specify a canonical or make sure only one version is served to visitors with redirects (both www- and non-www versions of the site return duplicate content).

4. Index health is poor. The robots.txt file is indexed. There are a few inactive subdomains, and an unattended WordPress and Drupal install in a subdirectory. Includes are indexed as separate pages ("/inc/footer.php"). Documentation (the content "meat" of the site) is off limits for bots. Over 90% of pages on the site are in the secondary index, which is not a good sign.

About 10 hits a day from Google is far too low for any commercial site to survive on. You could get more than 10 hits on a random wordlist.

Do not think solely about inbound links. Are you even linking to reputable sources yourself? End-node sites are far less interesting for visitors than hub sites.


Didn't ask for it? If I've learned one thing from this whole affair, it's how understanding and kind complete strangers can be. Your feedback is hugely appreciated.

That said, a few thoughts:

1. The link-based penalty stems from the fact that the traffic drops correlate exactly with Penguin release dates. Also, the "official" volunteers and other good folks over at the Google Webmaster forums came to the same conclusion. Before those exact dates I had ranked on page 1 for "php web form software", and page 1 or 2 for "web form software", since mid-2008.

To that end, you're 100% correct the market is tough, but we're a very specialized form builder, made especially for developers.

As far as deserving Page 1 - Google used to think so for several years, as my product is pretty darn great if you need flexible form software. We have four versions, two of which are free, covering both self-hosted and SaaS models. No one even comes close to offering such a wide variety of solutions.

In short, if you need form software and you find my site you're guaranteed not to be disappointed. If nothing else you're able to choose between two pretty awesome free versions, something no one else offers. I'm also ecstatic to answer questions and help users out, even if they don't buy or use my software. I would think this would be of high value to Google -- and it certainly used to be before the penalty.

As for the site recommendations: I'm taking these to heart for sure, though I have to say the current site is a direct result of working with the previously mentioned Google Webmaster forums for over 3 months, making the exact same types of changes and improvements. Nothing, and I mean nothing, worked.

I didn't know at the time, but it was recently stated by John Mueller from Google that a Penguin recovery is simply not possible until they refresh and release a new version of the Penguin algorithm.

So at the end of the day, sadly (and please do not take this as dismissive, as you can bet I'll be making changes): my problem is not content or structure, it's that darn link penalty.

All the while, and again, I'm not sure who this benefits.


Let me first say that I am sympathetic to your situation. My startup was financially ruined by the Google penalties.

However, I'm not sure that anchoring yourself to "they used to think I was good enough" is a valid argument. Perhaps they were presenting bad results that benefitted you before, and have fixed that with better results now.

It honestly sounds like you're mostly upset because something was taken from you that you were used to having. That does not necessarily mean you deserved it previously.


Thank you for the understanding, and my heart goes out to you as well for your penalties. In a more just world real users would decide what succeeds and what does not.

My defense of the current site from a content perspective is simple: just go to it (www.rackforms.com).

I feel very strongly the look (clean professional), layout (easy to navigate), and technical details (fast, fully responsive, etc) are not causing it to be penalized from a content perspective.

My reasoning for why it should rank is also simple: it used to rank high, and none of that was an accident. I've always followed common sense methodology when creating it, such as creating content for humans, not search bots.

Nevertheless, since the penalty I've happily made dozens of changes, most of which were suggestions from kind and well-meaning SEO folks -- none of them made a difference. Not a single one.

Conclusion -- my issue isn't nor has it ever been content, it's links.

Of course the irony is the vast majority of changes I made were to please search engine bots, not humans. Google has always said this is the opposite of what we should be doing.

For example, one guy suggested I used the word 'form' too much on my home page, and Google may consider this keyword stuffing. And so I pruned it from 23 to 12. Trouble is I sell web form software, and let's just say the changes I made were, in many cases, a stretch -- many sounded decidedly nonhuman. Who and what am I helping at that point? Googlebot is clearly smarter than that. The kicker: the current page 1 site used the word 'form' 43 times. It's just silly voodoo at this point, and no one except Google knows what the hell they're talking about.

Here's why this is all a bit scary: as link-penalized webmasters poke around the edges and make such changes, we're getting further away from a site that used to be looked upon favorably by Google from a content perspective.

The reason I cite "it used to be good enough" is common sense. Yes, we can always improve our site, and we should be; but a link penalty, mine especially, is so thorough and so far unrecoverable that making loads of other changes will very likely hurt more than help.

But again, just visit my site, visit the page 1 and 2 results, and then consider I'm page 15 or lower. I have no doubt my site, should its ranking be restored, would be a delight for Google users to visit.


I am not so afraid of the loss of attention. I think this takes a little getting used to, like using a route planner.

I haven't seen a comment yet about placing a loose object in front of your face. During a head-on collision you will eat that thing. Don't even put a box of matches on your dashboard. CDs become chainsaws. And this hunk of plastic?

Every year, loose objects inside cars during crashes cause hundreds of serious injuries and even deaths. In this paper, we describe findings from a study of 25 cars and drivers, examining the objects present in the car cabin, the reasons for them being there, and driver awareness of the potential dangers of these objects. With an average of 4.3 potentially dangerous loose objects in a car's cabin, our findings suggest that despite being generally aware of potential risks, considerations of convenience, easy access, and lack of in-the-moment awareness lead people to continue to place objects in dangerous locations in cars. Our study highlights opportunities for addressing this problem by tracking and reminding people about loose objects in cars.

http://www.star-uci.org/wp-content/uploads/2011/08/ubic489n-...


Google has another program for startups without funding or accelerator program access.

https://developers.google.com/startups/

I applied and received $500 in credit for free (also free access to online training and local events). To get 100k in credit would of course be nice, but I would have no way to spend that much in a year.

Given that this two-tier startup program already exists, a lot of comments in this thread are uninformed. Stop looking a gift horse in the mouth (unless it is a Trojan).


It is just extrapolating the error rate reduction over the last few years. Spam filters have become better than moderators at labeling spam only in the last decade or so.

When computers first started to become faster than mathematicians this was really a breakthrough. The same is happening now with object and speech recognition.

The computer successfully completes a task. That it is not how humans intuitively approach these same tasks is irrelevant to this accomplishment. What if the results were only half as good, but the system behaved more like humans: who does this satisfy?

The state of the art is capable of detecting far more than 1000 objects, does not need labeled data, is robust to changes in light and does not care about the camera used. No preprocessing of the data is needed; features are automatically generated (preprocessing the target labels is a bit silly, BTW).

So yes, in the very near future, algorithms will be better security guards than well... security guards.


My point is that extrapolating error rate reduction only applies to this tightly defined task.

You can only make claims about machines being better at "general" pattern recognition when we make progress on the issue that's stopped all Cognitivist General AI projects dead, which is that of situational awareness.

Arithmetic operations, spam detection and the task described in the article have a much smaller, and static, problem space than most human activities. You can demonstrably already knock up an automated-barrier style security guard. However, I'd argue that there does not exist an algorithm or appropriately weighted n-layer network that can handle all the ambiguity, countermeasures and ill-defined or contradictory situations that human security guards, or even just their object recognition capabilities, handle largely instinctively.


Do you think that computers are better at chess than humans? If yes, how does this relate to pattern recognition? If not, what makes someone or something better at chess, while still losing against a computer? Is it a beautiful move? Tactics? Irrational sacrifices to cause confusion?

Do you think that a machine's situational awareness cannot achieve or surpass the level of a human? If so, what is holding the machines back?

Why do you think that instinct works better to create more rational, consistent and correct predictions? Are 100 security guards better than a single security guard at dealing with ambiguities? Do you think an algorithm to detect fights, drug dealers, and pickpockets from street cams cannot exist? What if a NN could detect these cases faster and flag them to a human security guard for action/no-action?


> does not need labeled data

It needed training on 1TB of labeled images in the first place. Arguably it can be used to transfer that knowledge to other tasks with a much smaller amount of labeled samples but still requires supervision.
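
Roughly what that transfer looks like in practice, as a hedged sketch assuming Keras and scikit-learn (the pretrained model choice and the tiny random dataset are stand-ins, not the setup discussed here):

    # Reuse a network pretrained on a large labeled corpus as a fixed feature
    # extractor, then fit a small classifier on a handful of labels for the new task.
    import numpy as np
    import tensorflow as tf
    from sklearn.linear_model import LogisticRegression

    # pretend we only have 100 labeled examples for the new task
    X_small = np.random.rand(100, 224, 224, 3).astype("float32") * 255.0
    y_small = np.random.randint(0, 2, size=100)

    base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False, pooling="avg")
    features = base.predict(tf.keras.applications.mobilenet_v2.preprocess_input(X_small))

    clf = LogisticRegression(max_iter=1000).fit(features, y_small)  # cheap supervised head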


Google trained a NN on unlabeled YouTube stills. It was able to detect/group/cluster pics of cats without ever seeing a label. This still needs supervision to teach the NN that whatever internal name it gave this cluster is what we humans call "cats".

If the error rate gets low enough, a NN could start labeling pics.
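
A toy sketch of that pipeline, assuming scikit-learn: cluster unlabeled data first, then attach human-readable names to the clusters afterwards (the small digits dataset stands in for YouTube-scale imagery):

    from collections import Counter

    from sklearn.cluster import KMeans
    from sklearn.datasets import load_digits

    X, y_true = load_digits(return_X_y=True)  # pretend y_true is unknown to us

    clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

    # a human inspects a few members of cluster 0 and decides what to call it;
    # here we peek at the hidden labels only to show what that naming step yields
    print(Counter(y_true[clusters == 0]).most_common(3))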

Finally, recent work has shown that running a dictionary through an image search engine can yield high quality labeled images automatically.

Aside: Thank you for contributing to sklearn. Really feel like I am standing on the shoulders of giants when I use that library.


> When computers first started to become faster than mathematicians this was really a breakthrough.

Mathematicians do not compute numbers.

