
In Excalidraw I just have to click "share session" and anyone with that URL can see my whiteboard and interact with it. I get that tldraw has many more features etc., but how exactly does it make sharing a whiteboard so much easier compared to Excalidraw?


I don't know, maybe it was a skill issue on my side 2-4 months ago, but I felt as if I was forced to sign up back then.

I am sorry, I guess, for this comment then. Excalidraw also works great, but I still just like tldraw because of how familiar I have become with its interface.

Shame that the licensing of tldraw is less permissive than Excalidraw's, but I guess I am a little bit okay with it considering it's still open source. Though I may be wrong, I had read the license, and it seemed to focus mostly on keeping the tldraw name / packaging / copyright.

Here are the license restrictions:

    Not to disable, hide, remove, or alter the Watermark.
    Not to disable, change, or interfere with the license key validation process that governs the display of the Watermark.
    Not to remove any copyright or other notices from the Software.
    Not to make the Software available under a license that supersedes or negates the effect of this License.
    Not to distribute the Software or modifications of the Software as a standalone product, but only as part of another application.
    To include a verbatim copy of this License in any distribution of the Software.
    To comply with tldraw's trademark policy.


CEA (Controlled Environment Agriculture) is not bad in itself. When over-engineered and expanded quickly without working out the unit economics, it can become a bad idea.

Traditional farming is not viable near the place of consumption. It needs a lot of land, and land parcels of that size near a city are impossible to find. And even if you find one, it would sooner be used for more lucrative purposes, such as commercial property, than for farming.

So to make farming viable near the place of consumption (thereby reducing the distance produce has to travel, which reduces transportation cost and wastage in transit, and makes it possible to select seeds that are less hardy for transport but more nutritious and tasty), we need to improve the farm's yield, consistency, and flexibility.

A. Yield of the farm depends on: 1. the space required between plants (which depends on each plant's ability to absorb nutrients and its access to light), 2. the amount of light (the bullets), 3. the amount of carbon dioxide (the targets), 4. quality (same size, no nutrient deficiencies like tip burn or spotting), 5. no pest waste, and 6. cycles per year. Photosynthesis is essentially photons in the light driving the reaction between carbon dioxide and water: oxygen is released to the atmosphere, while the carbon combines with hydrogen from the water to become carbohydrates (the mass of the plant).

In a hydroponics farm, since nutrients can be dissolved uniformly into the water, the space required between plants is lower than in traditional land-based farming, and the quality is uniform because the nutrient density in the circulating water is uniform. The light can be increased (including with artificial lights), and the carbon dioxide inside the farm can be raised from 400 ppm to 1200 ppm (more targets). More cycles per year are possible, and so are stacked layers.

With all these benefits, farms near the city try to improve yield enough to make a 1-2 acre farm viable (as though it were a 20-30 acre farm hundreds of km away).
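
To make that concrete, here is a back-of-envelope sketch in Python. Every multiplier below is an illustrative assumption, not a number from my farm:

    # Back-of-envelope sketch of the yield multipliers discussed above.
    # All numbers here are illustrative assumptions.
    field_yield = 1.0      # open-field yield per acre, normalized to 1x

    density_gain = 1.5     # tighter plant spacing (uniform nutrient delivery)
    light_gain   = 1.3     # supplemental lighting
    co2_gain     = 1.3     # enrichment from ~400 ppm to ~1200 ppm
    cycle_gain   = 2.0     # more harvest cycles per year
    layer_gain   = 2.0     # stacked growing layers

    cea_yield = field_yield * density_gain * light_gain * co2_gain * cycle_gain * layer_gain
    print(f"CEA yield multiple vs open field: {cea_yield:.1f}x")
    # ~10x, which is roughly how a 1-2 acre urban farm can stand in
    # for a 20-30 acre farm hundreds of km away.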

The savings are the transport cost, the wastage during transport, the wastage during quality checking, etc.

B. Consistency

As mentioned above, good seed selection, uniform nutrient dosing, and a controlled environment with no pest attacks mean similarly sized produce. This helps with inbounding for retailers: they have to spend less time and money on quality checking or managing the sellable period.

C. Flexibility

In farming, big retailers have all the power. The contracts are one-way and forced on you. If they have a contract with you for 5 tonnes of cherry tomatoes and you aren't able to deliver on time, they will penalize you. But say you have 5 tonnes of spinach harvested as per the contract; they can always ask you as a favor, "Hey, unfortunately our inventory is still not cleared, can you delay by 1 week?" When you are running a farm at capacity, such delays are not easy to accommodate, because the next set of plants that need to be transplanted from the nursery is ready, and you need the current plants harvested out so the new ones can go in their place. Harvesting and holding the produce is also not an option due to its low shelf life.

Here is where playing with light and carbon dioxide inside the CEA is super helpful. You can increase the CO2 and the light to speed up growth, and decrease the CO2 and light to slow it down. This flexibility means you give the retailers more tolerance for the uncertainty in their forecasting. You take care of their headache. And that is valuable.
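
A toy sketch of that lever (the setpoints and bounds are made up for illustration; a real farm would tune them per crop and wire them into the climate controller):

    # Toy sketch: nudge CO2 (ppm) and photoperiod (hours of light) to pace growth.
    # All setpoints and bounds here are made-up illustrative values.
    def growth_setpoints(delay_days: int) -> dict:
        """Positive delay_days = retailer asked us to push harvest out, so slow down;
        negative = we are behind schedule, so speed up."""
        base_co2_ppm = 1000
        base_light_hours = 16
        nudge = max(-3, min(3, delay_days))   # clamp so we never stress the plants
        return {
            "co2_ppm": base_co2_ppm - 150 * nudge,    # stays within 550-1450 ppm
            "light_hours": base_light_hours - nudge,  # stays within 13-19 h
        }

    print(growth_setpoints(2))    # delay requested -> less CO2, less light
    print(growth_setpoints(-1))   # behind schedule -> more CO2, more light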

I run a pretty successful hydroponics farm in India and supply to online retailers and we are their preferred suppliers purely because we take care of their uncertainties. Ours is not super high tech. Labor is cheap in India. We have some essential tech, like the lights/CO2 etc. But that's about it. We didn't over-engineer.


Transport cost in the USA is very low, so that's not going to be much of an advantage for hydroponics here. The situation in India might be different.


It's the same for all goods globally: if you are not in a hurry, the energy / CO2 cost of getting it from a local warehouse to your house (optionally through a store) dwarfs the cost of shipping it via freight around the planet.

If you are in a hurry, then you need to ship via air, and then producing locally might help. (It depends on the gap between the efficiency of local production vs. the most favorable location on earth).


Correct. Which is why hydroponically grown produce focuses on low-shelf-life items like salad greens. And in countries like India where quick commerce is taking off (Zepto, Blinkit, BB Now, etc.), it is helpful for the companies to source fresh produce locally many times a week, as their dark stores are tiny and they can't hold huge inventory.

For my farm and nearby farms right inside the city, these quick commerce companies do milk runs and pick up fresh produce two or three times every day.


I wish there was a video of this in the article. The writing was very descriptive, but I would have loved to see a video of the octopus playing with the bottle.


Might not really help if you are short on time and need someone quickly. When I started my first startup (working on a POC before even registering the company), I was hanging out in ultra-niche internet pockets: forums, Reddit, niche Twitter, having conversations there about where the industry was heading, what was happening, trends, etc. That is where I found my cofounder. We exchanged some private DMs, got on a call, and then decided we wanted to build this thing. Started the company, scaled it, ran it for 4+ years, got acquired, and exited.

Another thing you can do is go to the Hacker News Algolia search and look for posts in your industry/domain. Find some of the smart answers from people who understand the domain deeply. Go to their profile, see if they have a link to their Twitter or something, and connect with them. Again, not super helpful if you are short on time. But leaving it here for those who, like me, might want to find someone outside their network for some future collaboration.


I play short time controls like blitz on Lichess. But for rapid I prefer chess.com, as Lichess has too much cheating. I find Lichess's UI/UX better and faster than Chess.com's.


Unlike Project Euler etc., where one really needs to be good at algorithms/math to write efficient code (otherwise a brute-force solution will just run for days), most of Advent of Code can be solved with a terrible algorithm because the input data is very small.

I think most CS grad students can solve Advent of Code. Some people probably don't finish it not because it is hard, but because they lose interest.


In my experience solving several years of Advent of Code, this is only true for part 1 of most days. A lot of part 2 solutions rely on heuristics to be solved in reasonable amounts of time.


There are some from 2023 that aren't. Day 12 part 2, at least for me, had one input that ended up being 95% or more of all permutations and would have taken weeks at least.


I failed to brute force day 4 part 2


All these products that pitch using AI to find insights in your data always end up looking pretty in demos and falling short in reality. This is not because the product is bad, but because there is an enormous amount of nuance in DBs/tables that becomes difficult to manage. Most startups evolve too quickly, and product teams generally try to deliver by hacking some existing feature. Columns are added, some columns get new meanings, some feature is identified by looking at a combination of two columns, etc. All of this needs to be documented properly and fed to the AI, and there is no incentive for anyone to do it. If the AI gives the right answer, everyone is like "wow, AI is so good, we don't need the BAs." If the AI gives terrible answers, they are like "this is useless." No one goes "wow, the data engineering team did a great job keeping the AI relevant."


I couldn't agree more. I've hooked things up to my DB with AI in an attempt to "talk" to it, but the results have been lackluster. Sure, it's impressive when it does get things right, but I found myself spending a bunch of time adding to the prompt to explain how the data is organized.

I'm not expecting any LLM to just understand it; heck, another human would need the same rundown from me. Maybe it's worth keeping this "documentation" up to date, but my takeaway was that I couldn't release access to the AI because it got things wrong too often and I couldn't anticipate every question a user might ask. I didn't want it to give out wrong answers (this DB is used for sales), since spitting out wrong numbers would be just as bad as my dashboards "lying".

Demo DBs aren't representative of shipping applications, and so the demos using AI are able to have an extremely high success rate. My DB, with deprecated columns, possibly confusing (to other people) naming, etc., had a much higher error rate.


Speculating:

How about a chat interface, where you correct the result and provide more contextual information about those columns?

Those chats could later be fed back to the model, with a DPO optimisation run on top.
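
A rough sketch of what that feedback loop could look like (the field names and the JSONL shape are assumptions, not any product's actual API):

    # Turn "model answered X, user corrected it to Y" chats into
    # (prompt, rejected, chosen) pairs for DPO-style fine-tuning.
    import json

    def to_preference_pairs(chat_sessions):
        pairs = []
        for session in chat_sessions:
            for turn in session["turns"]:
                if turn.get("user_correction"):
                    pairs.append({
                        "prompt": turn["question"],         # NL question about the data
                        "rejected": turn["model_answer"],   # what the model first produced
                        "chosen": turn["user_correction"],  # the answer after the user's fix
                    })
        return pairs

    # JSONL with prompt/chosen/rejected is the shape most DPO trainers expect.
    with open("preference_pairs.jsonl", "w") as f:
        for pair in to_preference_pairs([]):    # plug in real chat logs here
            f.write(json.dumps(pair) + "\n")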


Agreed.

Agent reasoning systems should learn based on past and future use, and both end users and maintainers should have power over how they work. So projects naturally progress to adding guard rails, heuristics, policies, customization, etc. Likewise, they first do it with simple hardcoding and then swap in learning.

As we have built out Louie.ai with these kinds of things, I've appreciated ChatGPT as its own innovation separate from the underlying LLM. There is a lot going on behind the scenes. They do it in a very consumer/prosumer setting where they hide almost everything. Technical and business users need more in our experience, and even that is a coarse brush...


Welcome to AI in general.

Billions wasted on a pointless endeavor.

10 years from now folks are going to be laughing at how billions of dollars and so much productivity were flushed down the drain to support Microsoft Word 2.0.

AI is a bubble. Do yourself a favor and short (or buy put options) the companies that only have "AI" for a business model.

Also short Intel, because Intel.


Our theory is we are simultaneously having a bit of a Google moment and a Tableau moment. There is a lot more discovery & work to pull it off, but the dam has been broken. It's been an exciting time to work through with our customers:

* Google moment: AI can now watch and learn how you and your team do data. Around the time Google PageRank came around, the Yahoo-style search engines were highly curated, and the semantic web people were writing XML/RDF schemas and manually mapping all data to them. Google replaced slow and expensive work with something easier, higher quality, and more scalable + robust. We are making Louie.ai learn both ahead of time and as the system gets used, so data people can also get their Google moment. Having a tool that works with you & your team here is amazing.

* Tableau moment: A project or data owner can now guide a lot more without much work. Dashboarding used to require a lot of low-level custom web dev etc., while Tableau streamlined it so that a BI lead who was good at SQL and understood the data & design could go much further without a big team and in way less time. Understanding the user personas, and adding abstractions to support them, was a big deal for delivery speed, cost, and achieved quality. Arguably the same happened when Looker introduced LookML and foreshadowed the whole semantic layer movement happening today. To help owners ensure quality and security, we have been investing a lot in the equivalent abstractions in Louie.ai for making data (and more) conversational. Luckily, while the AI part is new, there is a lot more precedent on the data ops side. Getting this right is a big deal in team settings and basically any time the stakes are high.


> Around the time Google pagerank came around, the Yahoo-style search engines were highly curated

Hmmm, no. Altavista was the go-to search engine at the time (launched 1995), and was a crawler (i.e. not a curated catalog/directory) based search. Lycos predates that but had keyword rather than natural language search.

Google didn't launch until 1998.


Hotbot was amazing back in the day. Google scored me post high school schoolin', however, so I won't complain...Shoot Google helped me make the local news back in the day!


Is that right? You do all that at Louie.ai?


Yep. A lot more on our roadmap, but a lot already in place!

It's been cool seeing how different pieces add up together and how gov/enterprise teams push us. While there are some surprising implementation details, a lot of it has been following up on what they need with foundational implementations and reusing them. The result is that a lot is obvious in retrospect, and well-done pieces carry it far.

Ex: We added a secure Python sandbox last quarter so analysts can drive richer data wrangling on query results. Now we are launching a GPU version, so the wrangling can be ML/AI (ex: auto feature engineering), users can wrangle bigger results (GPU dataframes), and we can move our own built-in agents to it as well (ex: GPU-accelerated dashboard panels). Most individual PRs here are surprisingly small, but it opens a lot!
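
For a flavor of the GPU dataframe piece, here is a generic cuDF/pandas sketch (not our actual sandbox API, and the column names are made up):

    # Aggregate a (possibly large) query result, on GPU when cuDF is available.
    import pandas as pd

    try:
        import cudf                      # NVIDIA RAPIDS GPU dataframe library
        HAS_GPU = True
    except ImportError:
        HAS_GPU = False

    def wrangle(query_result: pd.DataFrame) -> pd.DataFrame:
        if HAS_GPU:
            gdf = cudf.DataFrame.from_pandas(query_result)
            out = gdf.groupby("customer_id")["amount"].sum().reset_index()
            return out.to_pandas()
        # CPU fallback: identical pandas API
        return query_result.groupby("customer_id")["amount"].sum().reset_index()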


As someone building in this space, I am a bit surprised how many concepts you managed to combine in your last sentence. :'D

I will bookmark: ... and we will move our own built-in agents to it as well (ex: GPU-accelerated dashboard panels).


Those are pretty normal needs for us and our users. A big reason louie.ai exists is to make easier all the work from years of Graphistry helping enterprise & gov teams use Python notebooks, Streamlit/Databricks/Plotly Python dashboards, and Python GPU+graph data science in general. Think pandas, PyTorch, Hugging Face, NVIDIA RAPIDS, our own PyGraphistry, etc.

While we can't get those years of our lives back, we can make the next ones a lot better!


Mostly agree. I suggest keeping the ETL and creating a data warehouse that irons out most of these nuances that a production database needs. On a data warehouse with good metadata, I can imagine this working great.


I think getting clean tables/ETLs is a big blocker when you move fast and break things. I would be more interested in a GitHub Copilot-style SQL IDE (like DataGrip etc.) that has access to all the queries written by everyone within the company, and that runs on a local server or something for security reasons and to get the nod from the IT/Sec department.

And basically, when you next write queries, it just auto-completes for you. This would improve analysts' productivity a lot, with the flexibility of being able to tweak the query. Here, if something is not right, the analyst updates it. The Copilot AI keeps learning, giving more weight to recent queries than to older ones (a rough sketch of this weighting is below).

Unlike the previous solution where if something breaks, you can do nothing till you clean up the ETL and redeploy it.
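
A rough sketch of the recency weighting (the substring match is a stand-in for real similarity; an actual tool would use embeddings):

    # Rank the company's past queries for autocomplete context,
    # decaying older queries with a half-life.
    import time

    HALF_LIFE_DAYS = 90   # assumption: a 90-day-old query counts half as much

    def recency_weight(written_at, now=None):
        now = now or time.time()
        age_days = (now - written_at) / 86400
        return 0.5 ** (age_days / HALF_LIFE_DAYS)

    def rank_past_queries(prefix, history):
        """history items look like {"sql": "...", "written_at": unix_ts}."""
        def score(item):
            similarity = 1.0 if prefix.lower() in item["sql"].lower() else 0.0
            return similarity * recency_weight(item["written_at"])
        return sorted(history, key=score, reverse=True)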


That is correct. GPT-4 is good on well-modelled data out of the box, but struggles with a messy and incomplete data model.

Documenting data definitely helps to close that gap.

However, the last part you describe is nothing new (BI teams taking credit and pushing problems onto data engineers). In fact there is a chance that tools like vanna.ai or getdot.ai bring engineers closer to business folks. So more honest conversations, more impact, more budget.

Disclaimer: I am a co-founder at getdot.ai :)


Agreed, maybe I wasn't clear enough. I don't view it as BI team vs platform team vs whoever. Maybe a decrease in the need for PhD AI consultants for small projects, or to wait for some privileged IT team for basic tasks, so they can focus on bigger things.

Instead of Herculean data infra projects, this is a good time for figuring out new policy abstractions and finding more productive divisions of labor between different data stakeholders and systems. Machine-friendly abstractions and structure are tools for predictable collaboration and automation. More doing, less waiting.

More practically, an increasing part of the Louie.ai stack is helping get the time-consuming quality, guardrails, security, etc parts under easier control of small teams building things. As-is, it takes a lot to give a great experience.


There used to be a company/product called Business Objects, aka BO (SAP bought them), which had folks meticulously map every relationship. When done correctly, it was pretty good. You could just drag and drop and get answers immediately.

So yes, I can understand if there is an incentive for startups to invest in data engineers to make well-maintained data models.

But I do think the most important value here is not the ChatGPT-style interface; it is getting DEs to maintain the data model in a company where product/biz is moving fast and breaking things. If that is done, then existing tools (Power BI, for instance, has an "ask in natural language" feature) will be able to get the job done.

The Google moment the other person talks about in another comment is that the Google of 1998 didn't require a webpage owner to do anything. They didn't need him/her to produce something in a different format, use specific tags, put tags around keywords, etc. It was just "you do what you do, and magically we will crawl and make sense of it".

Here, unfortunately, that is not the case. Say in an ecom business which always delivers in 2 days for free, a new product is launched (same-day delivery for $5): the sales table is going to get two extra columns, "is_same_day_delivery_flag" and "same_day_delivery_fee". The revenue definition will change to include this shipping charge. A new filter will be needed if someone wants to see the opt-in rate for same-day delivery or how fast it is growing. The current table probably has revenue, but now revenue = revenue + same_day_delivery_fee, and someone needs to make the BO connection for this. And after launch, you notice you don't have enough capacity to do same-day shipping, so sometimes you just have to refund the fee and send it as normal delivery. Here the is_same_day_delivery_flag is true, but the same_day_delivery_fee is 0. And so on and on...
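
To make that concrete, a toy pandas sketch (the column names follow the example above; the data and metric definitions are made up):

    # How the revenue definition and the opt-in metric shift after the launch.
    import pandas as pd

    sales = pd.DataFrame({
        "order_id": [1, 2, 3],
        "revenue": [100.0, 80.0, 120.0],
        "is_same_day_delivery_flag": [False, True, True],
        "same_day_delivery_fee": [0.0, 5.0, 0.0],   # order 3: fee refunded, flag kept
    })

    old_revenue = sales["revenue"].sum()                                     # pre-launch definition
    new_revenue = (sales["revenue"] + sales["same_day_delivery_fee"]).sum()  # post-launch definition

    # The opt-in rate must use the flag, not the fee, because refunded orders
    # keep the flag but have a zero fee.
    opt_in_rate = sales["is_same_day_delivery_flag"].mean()

    print(old_revenue, new_revenue, opt_in_rate)   # 300.0 305.0 0.666...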

Getting DEs to keep everything up to date in a wiki is tough, let alone in a BO-type solution. But I do hope getdot.ai etc. somehow incentivize them to change this way of doing things.


The AI needs to truly be 'listening in', in a passive way, to all Slack messages, virtual meetings, code commits, etc., and really be present wherever the 'team' is, in order to get anything done.


Or maybe the database documentation has to be very comprehensive and the AI should have access to it.


There are two ways to do on-demand delivery (like quick commerce or food delivery): 1. Take responsibility for the delivery. Here, a customer orders, a restaurant accepts and prepares the food, and you take responsibility for finding a rider and getting it delivered. If there are no riders, it is not simple to cancel the order, as the restaurant has already prepared the food. In this case, you have to take the hit on the delivery fee (surge-price it) and pay out of your own pocket. 2. Don't take responsibility for the delivery. Here, when a customer orders and there is no delivery person, the restaurant takes the customer's money since they have prepared the food. The customer is pissed and the app will slowly die.


Netflix has a problem of multiple people sharing the same account. So that number could be 2-3 people's watching time clubbed together as one.


This is so true in this world of A/B testing everything. When A/B tests are poorly designed and results are interpreted over a short time frame, in my experience it always leads to poor decisions that hurt the long term.

