Hacker News | hitmyapi's comments

Yep love the ease of Heroku, just not a big fan of the $7/mo per container model since I have a lot of apps to host. Dokku looks like a great option for getting the Heroku experience on cheaper compute though, will check it out!


Oh interesting, I didn't know cloudflare offered serverless environments. I'll have to look into them, sounds like the free tier is generous with 100k requests per day


Oh adding a CI pipeline into the server is smart, that probably makes deployments a breeze. I mainly deploy manually with scp, would be awesome to automate that with a CI/CD flow haha

Same here on containers, in theory they seem great but my projects have been small enough where a container feels like more hassle than it's worth. I might need to just take the time to learn Docker properly though


These look like great options, I'll check them out. Dokku looks like it could be a good balance between offering a managed server experience while letting me deploy multiple apps to a single instance


It could potentially be "feature flagged". The client could ship a new version with all the resources/code they need for a specific feature, but a setting on the back-end determines if the feature is enabled or not. That way, Apple could easily toggle this feature on at any point by changing the flag's value on the back-end, without needing to ship an entirely new 15.x.x binary
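A minimal sketch of how such a remote flag check works (Python just for illustration; the payload shape and flag names are made up, not Apple's actual system):

```python
import json

# Hypothetical payload a backend flag endpoint might return; the key
# names here are illustrative, not Apple's actual configuration format.
payload = '{"flags": {"new_feature": true}}'

def is_enabled(config, name):
    # Unknown flags default to off, so older clients fail safe.
    return config.get("flags", {}).get(name, False)

config = json.loads(payload)
print(is_enabled(config, "new_feature"))  # True
print(is_enabled(config, "unshipped"))    # False
```

The client ships with the feature's code dormant; flipping the backend value enables it without a new binary.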


Hmm interesting, does the "move phone around" message ever go away? I'm working on tuning it still, but you might have to walk around and pan the camera around the area a bit more (looking into optimizing this more in coming updates). It's currently set up so ARKit can calibrate itself during that time, and that message should disappear once it's done calibrating. Once it's ready, a message like "Tap on the wall to see your artwork" should show up


I tried this morning in a room with more light, and it did indeed go away.


Hey there- that's a good point. My decision for the iOS 15 requirement was because some new APIs were introduced that made development a bit easier, but I think I can add support for iOS 14 with a bit of TLC. I'll look into that, appreciate the feedback!


Hi, I also got stopped due to the iOS 15 requirement. Honestly I didn't even know what version of iOS I was running, and the phone never asked me to upgrade. Now I'll go ahead and upgrade.


For some reason, Apple isn't nudging iOS 14.x users to install 15 yet - though they are still pushing 14.x updates.


Yeah, I’m happy to try it but not happy to update to iOS 15 yet, so one more vote for iOS 14 support here.


Hey all, wanted to share a new ARKit app I recently published. You can choose a photo from your gallery, specify its dimensions, and see how it'd look directly on your wall!

Inspiration for this project came when I was looking to buy artwork over the summer, but wasn't sure what size would look best on my wall. I made this so ideally you could preview how a piece of art would look on your wall before you buy it. Would love any feedback on it


This is off to a great start! A few initial UX suggestions:

- Most users are probably just looking to see how a single copy of something looks on their wall, so instead of subsequent taps adding subsequent copies, they should probably just move the existing one.

- My iPhone doesn’t have LiDAR, so it’s somewhat hit or miss whether ARKit hits the actual wall, or ends up a foot in front of it (or gets the angle wrong). Consider offering an adjustment mode to manually slide the artwork into place.

And now a product suggestion:

- Consider pitching an integration to Society6, one of the largest marketplaces for wall art. It would be a huge value add for customers to be able to view prints directly from creators’ catalogs with pre-filled sizes, and I’d imagine you could work something out that would support this project well into the future.


Love those suggestions, thank you for your comment! Definitely agree on adding better support for the non-LiDAR devices, I've noticed it's really hit or miss when placing and should be easier to correct


AR camera shots often have an uncanny valley appearance.

It would be cool if there were an option to have the initial simulated image ‘annealed’ with a generative adversarial network.

Maybe you could also partner with local frame builders (ask for 2% of sales or something) to send clients your way, since, so often, the frame has a large effect on the art’s aesthetic impact.


I think the uncanny valley thing is because the superimposition does not take into account the lighting conditions of the room.


Yup the superimposition is currently set up to ignore light in the room. I experimented with having it respond to light in the room, but ran into issues with the image rendering way too dark. I'll spend some more time fine-tuning it and seeing if I can get more realistic light rendering into a future release


Isn't determining lighting without some reference card a "hard" problem? How do you tell the difference between a yellowish room illuminated by white (sunlight) versus a whitish room illuminated by warmer yellower bulbs?

I am not a photography person, so this is a genuine question asked from ignorance


Yup, that's a great question. Generally speaking, that is a really difficult problem in the AR space, which I think is part of the reason why most AR models still look fake or fall into the uncanny valley.

Using the ARKit libraries, it gets handled automatically for the most part. I think there are a lot of computer vision/image processing algorithms running in the background that potentially pick up on shadows/reflections in real-world objects, which help it determine the white balance/color temperature of the room. I'm no expert on it myself though and not 100% sure on the science behind it


Go all the way and make a "buy now" option. That would entail partnering with printers and framers to offer standalone posters and framed versions of standard / custom artwork.


As someone who custom builds a lot of picture frames, the ability to add frame + mat ratio to the image would be great. Those can more than double the linear dimensions (4X+ on the area) and are a pain to properly size for the wall.

For example, something like:

+3" dark brown frame (for walnut, wood texture would be a plus)

+2" left/right; +2.5" top, +3" bottom white mat board (reveals are not usually uniform in custom work)

I currently spend a lot of time mocking up these ratios in Gimp before starting work, something that automates it and shows it on the wall would be amazing.
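The arithmetic behind those mockup ratios is simple to automate. A minimal sketch (Python just for illustration; the mat/frame numbers are the ones above, while the 16x20 print size is an assumed example):

```python
# All dimensions in inches. The mat reveals and frame width come from the
# example in the comment above; the 16x20 print size is an assumed example.
def framed_size(art_w, art_h, mat_l, mat_r, mat_t, mat_b, frame):
    # The frame adds its full width on every side; mat reveals differ per side.
    return (art_w + mat_l + mat_r + 2 * frame,
            art_h + mat_t + mat_b + 2 * frame)

w, h = framed_size(16, 20, mat_l=2, mat_r=2, mat_t=2.5, mat_b=3, frame=3)
print(w, h)  # 26 31.5 -- close to double the 16x20 print in linear dimensions
```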

Adding depth would be a nice touch too if that's possible, stretched canvas can stick out 1.5" or more.

Great work though, I like it so far and might be able to use it to help my mockups a bit already.


Cool project!

I have a pretty extensive niche art collection at home, and this sort of thing would be really useful. If I could suggest one important addition...

It would be great if you could set a surrounding frame and matting. You could maybe offer a few different frame styles/colours, and let users access a colour picker to select the shade for the matting. Then you could project a "framed" piece of art on the wall with AR.

This would make it really easy to see how a new piece would look in context, alongside other artwork which has been professionally framed. It would save a lot of uncertainty before a visit to the framing shop :)


Agreed I think that would be a great addition to be able to select and customize different frames, I could see how it'd be valuable to see it framed next to other framed artwork already on the wall. It was a feature I decided to leave off for the initial launch for simplicity, but will prioritize adding it into a future release. Appreciate the suggestion! :)


Thanks!

Also, just curious… How does developing for ARKit work? Does the Xcode Simulator have “virtual rooms” to test your code in, or do you have to compile and run from a real device each time?


Unfortunately the app won't even compile on the simulator; it seems like some of the AR libraries aren't actually available when building for the simulator. I have to test on a real device each time for the time being. That would be super cool if there were virtual rooms to test AR code in, I feel it could expedite some of the development time haha


Pro mode to allow anything other than 2-3 basic frame styles. “See your photo on the wall for free… match it to a frame size, color, mat size, color for $9.99”

Art costs at least $50 for something framed, so $10 for an app that can be re-used is great.

For true pros, I can’t think of anything that would be subscription-valuable other than maybe $10/yr, which would be mostly fine for homeowners who have just moved, and would provide a sustaining income from the “true pros” who would want to use it continuously.


I like this and I bet eventually it will be standard practice for galleries to have something like this. There's a lot of money in the art world, and especially since the pandemic not everyone can travel to every show even if they could afford it!

So here's my business advice: make this a white-label app and try to license it to Gagosian et al.

Now for a massive feature suggestion, probably not possible with ARKit: I want to record a room, then arrange pictures in it "AR-style" when I am not in the room anymore.

Because it's much more common for me to be where the picture is, and wonder how it would look in my room, than to be in the room and have the picture not there. (I make and also collect art.)


Thanks, it worked simply and fast. Great that you published this for free on the App Store.

One small feature request: it would be useful to switch between metric/imperial measurements. I don't live in the US and don't know how long an inch is.


Thank you, happy it worked for you :) yup that's a great call, adding support for different units is in the works!
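The conversion itself is a single factor, since an inch is defined as exactly 2.54 cm. A minimal sketch of the unit toggle (Python just for illustration; the function names are hypothetical, not from the app):

```python
# An inch is defined as exactly 2.54 cm, so the toggle is one constant factor.
INCH_IN_CM = 2.54

def to_centimeters(inches):
    return inches * INCH_IN_CM

def to_inches(cm):
    return cm / INCH_IN_CM

print(round(to_centimeters(24), 2))  # 60.96 -- a 24" print is about 61 cm
```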


Is there a reason your minimum targeted iOS version is iOS 15? Are you using any APIs/features that are only available in the latest iOS?


Why are we trying to “-fy” everything?


The suffix "-ify" means "to become, or to make someone or something become, something". So if you make an app that allows a photo to become a virtual canvas, "Canvify" is a short, unique name that's at least semi-self-descriptive.


I understand the convention. And I don’t want to be too negative.

But it’s a really awkward sounding name. And in tech I see naming conventions in products that go in and out of fashion. ‘-ly’, ‘-fy’, ‘i-‘, ‘-(2 digit year)’.


Yeah I agree "Canvify" doesn't exactly roll off the tongue nicely. I probably could've spent more time brainstorming a better name, but mainly I wanted something unique and somewhat related to artwork/canvases. For better or for worse, it's such a unique name that this is the only app that shows up in the App Store when searching "canvify"!


You done good. NEVER let brainstorming and bikeshedding for the perfect name delay you from writing code for one nanosecond. (Just do that in your free time while you're taking a walk or riding a bike or sleeping, and rename it later.)

If your idea isn't any good without the perfect name, then your idea isn't any good.

That being said, how about Canv-O-Matic? No?


Yup agreed, I’m guilty of spending arguably wayy too much time brainstorming the “perfect” name for projects in the past, and figured that time is better spent in development making the product better; worst case scenario, I could probably rebrand down the line.

Now that you mention Canv-O-Matic, it’d be pretty unique and I like how it implies an automatic canvas!


It's inspired by the epic Super Bass-O-Matic '76 (Bicentennial Edition):

https://www.youtube.com/watch?v=jbIbyUdA_B4

And the Bat-O-Matic '77 (Halloween Edition):

https://www.youtube.com/watch?v=iKllBHvpZcE

And also the Super Bass-O-Matic 2150 (40th Anniversary Edition):

https://www.youtube.com/watch?v=xy3RMMUScOE

Dan Aykroyd Explains the Bass-O-Matic:

https://www.youtube.com/watch?v=c06HorsmhjY

Dan Aykroyd as Jimmy Carter explaining Letter Sorting Machines and Reassuring Panicky Teens on Acid:

https://www.youtube.com/watch?v=-68iTvhWNB0

(But I digress...)


You would have said the same thing about Spotify or even Google if they were released today. If Canvify becomes big it will feel just as natural.

Adding a suffix is an obvious way to differentiate yourself from the generic form of the word. Obnoxious complaint and strange thing to hold a grudge against.


At least he didn't call it "Canvasr"! The missing-penultimate-e-suffix is so tird, not wird.


Great job! My ultimate (and totally unrealistic) wish list:

Multitouch dragging and resizing and rotating and pushing and pulling images. Really cool if it works seamlessly with camera motion and rotation too. So you can correct the size and depth of misplaced pictures easily.

(The dragging gesture currently does, hurray! Now "just" make the resizing and twisting and pulling and pushing gestures work that way by following camera motion too! Yes it's tricky code but it'll be worth it and satisfying.)

Bump the edges of dragging and resizing pictures up against the corners of the room. And when picture edges bump together they push each other around, with 2d physics (or use 3d physics with gravity oriented so the wall is a flat or graded floor). So you can't drag a picture into an orthogonal wall, or make pictures on the same wall overlap, and the wall corners and picture frames have more physicality, so you can easily align multiple images along a corner.

Scroll all the pictures on the wall at once by dragging the background like a virtual desktop that clips to the corners of the room.

Full depth 3d frames, that you can look around and see the back of.

Hinged frames that you can open this way or that to reveal a hidden safe (that you can open by twisting the dial to see what's inside) or window (that you can see through in parallax to another outdoor photo (or 3d scene with dancing characters) you've placed outside in the distance, like the a-ha "Take On Me" music video door AR demo).

https://www.youtube.com/watch?v=djV11Xbc914

https://techcrunch.com/2017/07/27/someone-made-the-take-on-m...

https://www.youtube.com/watch?v=gaEeRoBY8jY

Flat screen TVs for walls, big deep retro monitors for desktops, and big wooden Magnavox Color Television consoles with rounded screens for floors.

Sometimes the reception cuts out and you have to fiddle with the antenna to get the picture back. In-app purchase of a cable tv subscription to fix all your bad reception problems once and for all.

Simulating a backlighted screen solves the "uncanny gap" lighting problem by being emissive like a TV, instead of reflective like a painting. Well you still have to light the frame itself, but ARKit will tell you the light direction estimate, and it's ok if your frames have dark and shadowy edges and bright specular highlights, since it's not the actual picture you're trying to look at, just a frame.

Save and exchange scenes online!

Map (load and save over the net) your sets of pictures on one individual flat wall surface to other people's flat walls in different rooms, instead of trying to map between the entire 3d rooms, which probably won't match in size and shape.

So I could have one wall mapped back-and-forth with one person's wall, and another wall mapped back-and-forth with another person's wall.

Then stream live video in the pictures!


Love the suggestions and imagination with what’s possible, I think a lot of that is achievable using the existing technologies. I can see the future of AR, instead of buying a TV and a separate Netflix/content subscription, the content subscription would come with the ability to project its channels directly onto the walls using AR!

