Markdown has emerged as the "lingua franca" of LLMs. It is structured enough to convey meaning, and yet readable everywhere.
However, as document lengths increase:
1. It is wasteful to place the entire document in context
2. It can reduce accuracy by diluting the context with less relevant information
3. It increases the possibility of errors during edits
Inspired by Cloudflare's Markdown Content Type and Code Mode, we propose a lightweight document object model for Markdown to:
1. Reduce costs
2. Increase the accuracy of locating key sections
3. Increase speed
4. Increase the accuracy of editing documents
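A Markdown DOM along these lines might expose sections as addressable nodes, so an agent can read or rewrite one section without touching the rest of the file. A minimal sketch (the names and API here are my own illustration, not the library's actual interface):

```typescript
// Split a Markdown document into heading-delimited sections that can be
// located and edited individually, then serialized back out.

interface Section {
  heading: string; // heading text; "" for any preamble before the first heading
  level: number;   // heading depth (number of '#'); 0 for the preamble
  body: string[];  // lines belonging to this section
}

function parse(markdown: string): Section[] {
  const sections: Section[] = [{ heading: "", level: 0, body: [] }];
  for (const line of markdown.split("\n")) {
    const m = line.match(/^(#{1,6})\s+(.*)$/);
    if (m) {
      sections.push({ heading: m[2], level: m[1].length, body: [] });
    } else {
      sections[sections.length - 1].body.push(line);
    }
  }
  return sections;
}

function serialize(sections: Section[]): string {
  return sections
    .map(s =>
      (s.level ? "#".repeat(s.level) + " " + s.heading + "\n" : "") +
      s.body.join("\n")
    )
    .join("\n");
}

// Edit one section by heading instead of rewriting the whole document.
function setBody(sections: Section[], heading: string, body: string[]): void {
  const target = sections.find(s => s.heading === heading);
  if (target) target.body = body;
}
```

An agent could then be handed just the section list (cheap) and request or patch a single section by heading, which is where the cost and edit-accuracy wins come from.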
I use Camo (lifetime $79 license) and an iPhone SE I picked up on eBay for $200.
I've had dozens of people comment on how clear my camera is. Many have since purchased a similar setup. I used OBS for a long time since it was free, but the quality wasn't close.
If you're in a line of work that requires you to stream, it's a worthwhile investment in a great product from a company that just keeps getting better.
I used to use this setup but got tired of turning on the phone before meetings, fumbling with it in its stand, and slightly messing up the camera angle. Camo also sometimes wouldn't connect right away, which is especially annoying when I'm already running late. Did you find a way to mitigate those issues?
On timing – I just had to start planning a bit better. In the worst case I'd start the meeting without video and turn it on a minute or two in. Well worth it for me.
In the absence of an explicit regulatory framework, legality can probably only be determined after negative outcomes, unfortunately.
It also seems like, without catastrophic retail investment losses, there isn't much political gain in implementing such a regulation, and potentially much to lose. Reacting too late to prevent harm seems to be the logical outcome.
Not true. Fiat currencies are backed by the consumer of last resort: governments. Everyone has to pay taxes in the official currency of the country. For instance, the US dollar is backed by the roughly 25% of GDP that the government collects in taxes every year.
well, if people decided to trust them even without the audit, then what's to say that it should be illegal? Investments have always been caveat emptor.
Investments were this way in the 1800s and early 1900s.
A lot of protections have been put into place since then. There's a reason we prosecute Ponzi schemes, offer FDIC and SIPC insurance, stress test banks, etc.
I agree with you that Tether is very risky, but I think your revaluation of the "digital coins" class is mistaken. In its attestations, Tether claims to value digital coins at "cost less impairment," so effectively at the minimum value the coin has reached since its purchase.
With that methodology, it is erroneous to apply "down 60% from market highs" to estimate the current value.
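To make the accounting point concrete, here's a toy example with hypothetical numbers (these are illustrative, not Tether's actual holdings):

```typescript
// "Cost less impairment" carries a coin at its purchase cost, written
// down to the lowest price seen since purchase, and never written back up.
const cost = 100;            // purchase price
const marketHigh = 300;      // peak price after purchase
const lowSincePurchase = 80; // trough price after purchase
const marketNow = 120;       // current market price

const bookValue = Math.min(cost, lowSincePurchase); // carried at 80

// Applying "down 60% from market highs" to the book value double-counts:
// the books never reflected the 300 peak in the first place.
const naiveHaircut = (bookValue * 40) / 100; // slashes 80 down to 32
```

Here the conservative book value (80) is already below the current market price (120), so discounting it again by the peak-to-trough drop badly overstates the loss.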
That said, we can speculate endlessly about Tether's ultimate solvency because we don't have a detailed enough balance sheet. I think it's clear, however, that despite acting like a bank, Tether's capital could not possibly meet Basel II-like risk weighting standards. Even with completely above-board operation, Tether would be reasonably likely to collapse if we experienced a repeat of the 2008 financial crisis.
You're absolutely right regarding "from market highs."
To do this correctly you'd need purchases and sales versus assets.
All that said, it's why I put a rough estimate of 30% (instead of 60%) and didn't include any losses on broader commercial paper (which has dropped). So the overall estimate is defensible.
Also agree on the Basel II-like risk. The primary issue with Tether is the marketing around stability and the lack of transparency. If it were treated like any other "money market," with better NAV reporting, position reporting, and rules around withdrawals, then we'd mitigate some of the risks.
That's not necessarily true based on your link. Given that Tether is invested in interest-bearing instruments, the pool of capital should be more than the total outstanding Tether value. So even after marking some assets to market at lower prices, they have a safety buffer...
It's fantastic to see great research combined with great tutorials – bravo. You all have bitten off an incredibly hard problem with pragmatism and tenacity. Keep up the great work.
To the authors: did you test any of your own recordings? I've used my own and clips found online, in WAV and other formats, at various sampling rates.
All of the results come back gibberish, while results on the training data seem just fine. Curious if you've tested the above to ensure it didn't overfit.
As I understand it, they do have a working prototype. I believe their chip is 3mm and works in pockets, etc. There's been so much hype and so little delivered in this area that I'm finding it hard to separate the wheat from the chaff.
Via the Lodash CLI you can run `lodash underscore`, which produces a 6.7 kB gzipped build. In my experience, jdalton is a very solid coder and the lodash package is excellently maintained.
https://github.com/brandoncarl/markdown-dom