This is an extremely complicated question. I run a UI/UX studio here in Toronto, and we see the full spectrum: some clients are shocked at our prices for web design/development, but we've also lost jobs with other prospective clients because our prices were too low (no joke).
It depends on two things:
1. Where you get your business
2. Who your competition is
For many, #2 depends on #1. We see three main sources of business: Google (we rank highly for 'Toronto web design'), Dribbble (we have the 2nd most followers in Toronto, and 8th in Canada), and of course, referrals.
Dribbble leaves us with clients with the biggest budgets, who aren't afraid of 5-figure prices for web design. This is where we're competing with other high-end designers and thus our prices are more in-line with theirs.
Referrals are the strongest leads: even clients who don't have larger budgets are more comfortable stretching a bit because of our strong work for a friend or mutual acquaintance (and we're often happy to mark the price down a bit for a referral).
Leads from Google are where the clients who get really shocked come in; they don't really understand or appreciate the work that needs to go into good, iterative, considered UI/UX (and the custom development that goes along with it). The competition here includes a swath of companies who game SEO and outsource work offshore, and they can charge considerably less. Thus, our prices look unreasonably high to the untrained eye.
There's also a phenomenon where cheaper clients expect more. If you position yourself as premium, better clients will consider you and your services -- the types of clients who aren't shocked by "high prices" for design and/or development.
So, if you find all of your clients are shocked at your prices, you're not marketing towards the right clients.
I'm a self-employed entrepreneur and I do some web design from time to time. I recently got a lead from an organization which had EU funding of €250,000. They wanted me to build two sites with independent designs and features, whose content had to be editable. I offered them my full consultation in choosing everything from the ground up, but they kept asking for the price. Once I told them, I never heard anything back.
I offered to do the sites for about 0.5% of what they got from the EU (i.e. about €1,250), but they apparently thought it was too much.
Recently a friend of mine was watching me code and was astonished at the complexity of it. As he said, 'For me, a button has always been just a button with no further functionality.' I also let him change the source code, as he was interested to see what effect it would have on the program. He deleted a single bracket and was amazed to find that the program failed to run. I then went on to tell him that no matter how big your codebase is, a single mistake like that can break it all.
I didn't know about this. Still, I was expecting a little longer of a communication with support-- 'escalation' or at least 'the ask' should have been done before the public shaming, in my mind. All I see now is a cordial and polite explanation from Instacart.
Let's get real: outside of two $8 charges that he hasn't even asked to have reversed, this is exactly the same service that he describes as "great" in the same post.
Furthermore, 'designers' and HN users should care because LayerVault runs a 'Hacker News' for designers at https://news.layervault.com/. LayerVault is not the right entity to run this site after these DMCA shenanigans (if in fact no infringement has taken place).
The rebuttal is a DMCA Counter-Notification. They're easy to write and just assert that you do in fact own the copyright, or are otherwise authorised to use the materials that the DMCA notice referred to. Once that's been done, the host has to restore the removed content within a certain time period, and the issuers of the original notice have to take action directly against you (as in sue you) if they want to pursue the issue further.
Before we start yelling about "abuse of process" though, we might want to wait for more information about what's actually going on here. It would hardly be the first time one small company has used another company's assets without permission, so perhaps we should see what LayerVault has to say before getting so worked up.
LayerVault has posted details on what they felt was infringing and appears to misunderstand (willfully or otherwise) the purpose of a DMCA notice. There is no evidence that artwork identical to that of LayerVault has been used in FlatUI. While some artwork is similar, that is another issue entirely, an issue outside that of the DMCA.
I think this is another case of "Failures were justified and assigned an appropriate cause". In reality, the author didn't have the discipline to balance work and entertainment. This sounds tantamount to laziness to me; who doesn't want to be having "fun" all the time instead of working or studying?
I have difficulty putting it into words but this sounds like one big excuse blaming society/parents/school for his failures rather than himself, which is where they lie.
It's hard to expect him to know how to balance work and entertainment when he was never taught these things (and that goes for the child growing up, and the kid in college, more than for the adult).
Discipline isn't something that you are born with, and if nobody is there to teach it to you then you have to teach it to yourself. It sounds to me like the author is beginning to learn his lessons, but that doesn't change the failures of the parents/mentors of his childhood, who clearly did not adequately prepare him for college.
One of the reasons I like college as an institution, though, is that it is a 'safe' place for you to fill the gaps in your childhood education. It's more or less a safe haven where you're finally on your own, but still with lessened consequences.
When you always have a group of people supporting you (like your parents), it's difficult to realize that you lack discipline, and it's difficult to realize the full consequences of your laziness.
I reject the notion that a child needs to have great parents in order to recognize basic cause and effect.
In my opinion, we need to stop perpetuating this idea that children are some kind of tabula rasa that must be filled with knowledge from their parents, as if to say that children are incapable of figuring things out for themselves. This thought process is what the excuse makers in life thrive on.
In more juvenile terms, I'd put it as "Oh, I'm sorry, let me call the WAAAAAAAAHmbulance".
So far I have met very, very few people who didn't have some sort of mental breakdown during their 20s. Attributing it to "youthful idealism", or what have you, sounds rather disingenuous. Also, there are definite problems with the reverse idea of "you're not special, you're just a cog in a machine" -- namely that once you think that way about yourself, you also start thinking that way about other people and then you're thinking of people as things.
Besides, you are special. You're just not more special than the guy next to you.
Even if they aren't given to children I don't think they set off the "hey this is really dangerous" alarm for a lot of people. Ideally an adult would play with these then count them all to make sure none are missing and store them out of reach of a child -- every single time with no mistakes or exceptions. Requiring that level of care kind of moves these out of the "toy" category in my mind.
I have a set too, and they are cool. I let my six-year-old play with them, because I trust her. She knows not to let her friends play with them. In the light of this suit, I'll have to reconsider that.
They are really easy to lose, and if lost, they can be picked up and swallowed by a kid, and their parent would never know. The particular danger of strong, tiny magnets is quite non-obvious (inspecting the comments here makes that clear), even spooky.
They are commonly sold at science centers, near children's toys: another indication that they were marketed to kids, and not just as a desk toy.
I've been dreading the day one of the ones I've lost gets sucked into the vacuum...no telling how much damage it could do if it sticks onto a gear.
> I have a set too, and they are cool. I let my six-year-old play with them, because I trust her.
Kids are random, much more so than adults. I strongly recommend you never let her access those things again. There's only downside, and the upside (fun, diversion, learning about magnets) can be had from other things, or at some future date when her judgment is better. Plus, give something to a kid and it could easily end up in the hands/mouths of other kids. They/we start life with our heads full of all sorts of irrational/mythical beliefs, and a blurry line between imagination and reality. For example, the classic kid thought meme: if something looks like candy, it probably tastes like candy, so it probably is candy.
Woah, this totally reminded me of the time when I ate a Brufen (ibuprofen) tablet when I was little, thinking that it was candy. It even had the sugar coating. So basically, kids are usually dumb and prone to random acts of craziness.
I think this was the lowest-hanging fruit in Rails. Glad they've finally addressed it in core. The years of people using job queues just to send mail are finally over!
But, why doesn't this let me pass objects to Mailer methods? I can't help but think passing ids and finding the object again on the other side is bad design.
Myself, over the past few years, instead of a job queue, I have been using a simple gem called spawn (https://github.com/tra/spawn) to fork processes to asynchronously send mail from Rails. It lets me pass objects around... I hope that this asynchronous ActionMailer gets this functionality as well!
I think the main reason for this is that strange things can happen when you're passing marshalled objects around. For instance, say that your application has just created an Email object which has a "title" attribute, and it has been marshalled and put into a queue. Your queue is a bit full, so it takes a while for the email to be processed. Right after it has been enqueued, you roll out an update to your code, changing the "title" field to be called "subject". As soon as your Email object is pulled from the queue, you will get NoMethodErrors from your asynchronous mailer code because the unmarshalled object doesn't have a "subject" method yet.
The reason queueing systems like Resque and Sidekiq pass ids around (instead of objects) is to make sure that the asynchronous worker always has a fresh object straight from the data store. I guess that's why Rails core has decided to implement it like this as well.
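That stale-schema hazard can be demonstrated with nothing but the Ruby stdlib. This is a made-up `Email` class, not Rails code; reopening the class stands in for deploying new code while the old payload sits in the queue:

```ruby
# The Email class as deployed when the job was enqueued (hypothetical example).
class Email
  attr_accessor :title
  def initialize(title)
    @title = title
  end
end

# This marshalled blob is what would be sitting in the queue.
payload = Marshal.dump(Email.new("Welcome!"))

# Simulate the deploy that renames the attribute: reopen the class,
# drop the old accessors, add the new one.
class Email
  undef_method :title, :title=
  attr_accessor :subject
end

email = Marshal.load(payload)
email.instance_variable_get(:@title)  # => "Welcome!" -- old data came back as @title
email.respond_to?(:title)             # => false -- the old reader is gone
email.subject                         # => nil -- @subject was never set
```

Either way the worker ends up with stale or missing data: code written against the new schema sees `nil` here, or raises `NoMethodError` if no new accessor has been defined at all.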
> I can't help but think passing ids and finding the object again on the other side is bad design.
It is good design. Why? Because you don't want to pass mutable state (read: your user model) across thread/process boundaries if you can help it. What if the mailer modifies it? What if the caller modifies it while the mailer is working with it? It may no longer be valid. Rather than trying to reason about all possible states it could be in, it's easier just to make the job bootstrap itself with all the data it needs.
Confining instances to threads is far more sane than using mutexes and locks or relying on your runtime's GIL to watch out for you.
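A plain-Ruby sketch of that pattern, with in-memory constants standing in for the database and the queue (no real job library involved; all the names here are invented):

```ruby
# Stand-ins for the database, the queue, and an outbox we can inspect.
USERS      = { 1 => { name: "Ada", email: "ada@example.com" } }
QUEUE      = []
DELIVERIES = []

class WelcomeMailJob
  # Enqueue only the id: a small, immutable value with no shared state.
  def self.enqueue(user_id)
    QUEUE << user_id
  end

  # The worker bootstraps itself with fresh data when it actually runs.
  def self.drain
    while (id = QUEUE.shift)
      user = USERS.fetch(id)  # fresh read: sees any updates made after enqueue
      DELIVERIES << "Sent to #{user[:email]}"
    end
  end
end

WelcomeMailJob.enqueue(1)
USERS[1][:email] = "ada@newdomain.example"  # caller mutates after enqueueing
WelcomeMailJob.drain
DELIVERIES.last  # => "Sent to ada@newdomain.example" -- no stale state crossed the boundary
```

Because only the id crossed the queue boundary, the mutation made after enqueueing is naturally picked up by the worker's own lookup.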
Since you're unlikely to be sharing memory with your background processor, what you'd really be doing is marshaling the object on enqueue and unmarshaling it on dequeue. In that case, you're no more susceptible to further modifications to the data than you would be with a DB lookup.
There of course may be other issues with marshaling. E.g., marshaling of procs is particularly tricky. But I've been marshaling with sidekiq for the past couple months and really haven't run into any problems. And it cleaned up the code a fair bit because I basically treat it as an RPC without any of the ceremonious DB lookups at the start of every async method.
I know people have largely moved to using remote services to send their mail. However, if you're about to introduce a queue only for that, then a local MTA seems a more sensible choice. A minimal Postfix satellite config is 12 lines: not rocket science, and as robust as a brick.
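For a rough idea of the shape of such a config, here's a hypothetical satellite-style `main.cf` that relays everything to an upstream smarthost. Hostnames and paths are placeholders, and the SASL/TLS lines depend entirely on what your relay requires:

```
# Hypothetical minimal Postfix satellite main.cf: no local delivery,
# loopback only, relay everything to a smarthost.
myorigin = example.com
relayhost = [smtp.example.com]:587
inet_interfaces = loopback-only
mydestination =
mynetworks = 127.0.0.0/8 [::1]/128
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```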
I currently use a combo, because some of our emails are reports that take a while to compile. So the async method makes sense there. But every one of our nodes has a local Postfix config and transport up to SendGrid. I trust Postfix to do a better job dealing with temporary outages than I will in Ruby.
I'd imagine this is great for those on PaaS where they can't just install local software like an MTA, too.
With the default Rails.queue you can pass objects to your mailer. The issue is when you start using a background processor that requires marshalling of the object. It is recommended not to use complex objects, but if you are confident that the objects you are passing will marshal properly, you should be OK.
Nintendo didn't develop the WiiU for the new controller, I would imagine the tablet controller would work on the current Wii just fine. But it's time for a console refresh so Nintendo might as well toss in a new gizmo to entice people to upgrade. I expect MS and Sony to do something similar although I would imagine this feature will carry over to the new MS console since it doesn't require new hardware.
Well, it depends on execution though. Just 'having' a feature isn't the whole story. For some reason I have the feeling Nintendo is gonna come up with some fun, innovative concepts for their system that make it appear novel and interesting, while Microsoft's implementation looks forced and unnecessary. At least that's how it seemed with the Wii vs. Kinect.
We contacted 37signals when they originally announced their intention to sell Sortfolio. Having bought and sold sites before we asked for some pretty standard information about traffic, revenue, etc. These questions weren't answered and we did not pursue the matter further.
I'm not surprised the site wasn't sold; the "business" of buying and selling sites isn't exactly very "37signal"-y.
With a price now available it's much more transparent for everyone involved; I hope though that this time they are able to accommodate due diligence.
Do you have (and are willing to share) the credit card information on file for each customer? Or is the new Sortfolio owner expected to contact or otherwise prompt existing customers to re-enter their payment information?
Rails needs to do a release solely focused on speed. Dropping 1.8.7 is a good step for a lot of people, but I feel Rails itself has gotten slower since 2.x and I know a lot of people would agree with me.
I do a lot of work in CodeIgniter as well as Rails (w/ Ruby 1.9.2 in production), and there's no doubt that my CodeIgniter apps on slower servers with little-to-no deliberate optimization are faster than my Rails apps.
For just one particular example, I have found rendering partials in Rails to be such a point of poor performance that I often find myself avoiding it.
There's this red herring (and a pet peeve) that making the framework 'less bloated' by dropping components or making them optional equals better performance. You'll even see a comment on this blog post asking which components to remove to improve performance! It doesn't work that way when the core components of Rails remain slow.
While the speed of Rails had Ruby 1.8.7 to blame for a long time, now that it's being dropped I think 4.0 has a unique opportunity to optimize for 1.9 and make a big difference to the overall speed of the framework.
As the number of Rack middleware pieces in a base Rails app grows, it gets slower. MRI and YARV spend a lot more time doing GC as the stack depth grows. Aaron Patterson mentioned that he was working on a modification to the Rack API where each piece of middleware did not call the next piece, reducing the stack depth and drastically improving the runtime performance of Rails apps.
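The nesting he's describing can be seen in a toy, plain-Ruby version of the Rack calling convention: each middleware holds the next app and calls it, so every layer adds a stack frame to every request. (This is a sketch of the call structure only, not real Rack.)

```ruby
# Innermost "app": records how deep the Ruby call stack is when it runs.
app = lambda do |env|
  env[:depth] = caller.length
  [200, {}, ["ok"]]
end

# A no-op middleware layer in the usual Rack shape: wrap the next app
# and call it, adding one frame to the stack of every request.
wrap = ->(inner) { ->(env) { inner.call(env) } }

stack = app
30.times { stack = wrap.call(stack) }

shallow = {}
app.call(shallow)    # direct call, no middleware

deep = {}
stack.call(deep)     # through 30 layers

deep[:depth] - shallow[:depth]  # roughly one extra frame per middleware layer
```

Deeper stacks mean more work for the VM on every request, which is the cost the proposed Rack API change was aimed at.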
> For just one particular example, I have found rendering partials in Rails to be such a point of poor performance that I often find myself avoiding it.
A good caching plan obviously helps with these kinds of issues.
> There's this red herring (and a pet peeve) that making the framework 'less bloated' by dropping components or making them optional equals better performance.
In the case of controllers, inheriting from ActionController::Metal and not including unnecessary modules speeds things up a lot.
In general Rails may be slow in comparison to other frameworks. But that doesn't mean it's too slow. I don't really care if my app only manages 5000 requests per second, versus framework x's 6000 or more on the same hardware.
> I don't really care if my app only manages 5000 requests per second, versus framework x's 6000 or more on the same hardware.
Nitpick: your numbers are a little off. Rails/REE tops out at around 300 reqs/sec on a current commodity box (16 cores/16G). This is of course application-dependent, and optimization can squeeze it some, but it's a different ballpark.
When you look outside of ruby-land you can indeed find frameworks that will handle your 6000/sec on the same hardware (e.g. twisted, node, some of the evented java-frameworks).
So, depending on what you compare to rails, the difference can easily be an order of magnitude.
When you use Metal endpoints, as I mentioned in my comment, 6000 requests per second is actually very possible. Yes, you need a lot of resources, but I didn't suggest otherwise. And yes, on this hardware other frameworks will manage even more requests.
My low-budget netbook just gave me 327 req/s with an out-of-the-box Rails config, running the benchmarking tool on the same machine.
> A good caching plan obviously helps with these kinds of issues.
Caching is suggested far too often in the Rails world. The solution to "the default tools for template abstraction are too slow to use" is not "use a difficult-to-get-correct system with concerns that cut across the entire project."
As someone else mentioned, Rails is much slower than other frameworks. It does an order of magnitude less than 5000 req/s. Obviously you can just scale it out and load balance, but that suffers increasing costs from an operations perspective. Aside from per-machine cost, there are points where adding more servers requires a big change in your management techniques. Those costs shouldn't be trivially dismissed, and if we can look at reworking our tools to keep most (ideally, all) of the productivity but save on utilization (read: management overhead), how is that a bad thing?
> Caching is suggested far too often in the Rails world. The solution to "the default tools for template abstraction are too slow to use" is not "use a difficult-to-get-correct system with concerns that cut across the entire project."
I'm not selling caching as a silver bullet (and I'm not saying it's trivial to implement, either). It's just something a lot of people either don't use at all, or just get wrong. And surely, if something can be cached, it should be.
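The basic read-through pattern behind fetch-style fragment caching can be sketched in plain Ruby. The "partial", the cache, and the key scheme here are all invented for illustration; this is not the Rails API:

```ruby
CACHE   = {}
RENDERS = []  # track how many times the expensive "partial" actually runs

# Stand-in for an expensive template render.
def render_partial(post)
  RENDERS << post[:id]
  "<article>#{post[:title]}</article>"
end

# Read-through fetch: return the cached value, or compute and store it.
def cache_fetch(key)
  CACHE.fetch(key) { CACHE[key] = yield }
end

post = { id: 1, title: "Hello", updated_at: 1_700_000_000 }
key  = "posts/#{post[:id]}-#{post[:updated_at]}"  # key changes when the record does

first  = cache_fetch(key) { render_partial(post) }  # renders once
second = cache_fetch(key) { render_partial(post) }  # cache hit, no re-render
```

Keying on the record's timestamp is what makes invalidation automatic: updating the record produces a new key, so the stale entry is simply never read again.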
> As someone else mentioned, Rails is much slower than other frameworks. It does an order of magnitude less than 5000 req/s
With Metal endpoints, proper configuration and lots of resources, this is very possible.
On a pretty large rails app, we switched to 1.9.2 about a year ago. Startup time for our app is much, much worse than it was under ree, but our average response time significantly improved -- we saw close to a 50% drop in the time spent in ruby code across nearly all requests. I wouldn't use slow startup time as a reason to avoid benchmarking your app in production mode against 1.9.x, since it could result in a huge perf boost across the whole app.