Show HN: Wavepot – Digital audio workstation of the web (wavepot.com)
625 points by stagas on June 17, 2014 | hide | past | web | favorite | 132 comments

This isn't really a DAW. For those unfamiliar, a DAW is a fully-featured music creation environment including sound design, composition, recording, arrangement and mixing in one interface. Examples include Logic, Cubase and Ableton.

While I suppose this does allow for all of those things, as much as any programming language with access to an audio API does, this would be better described as a web-based DSP livecoding environment.

Apart from the naming quibbles, it looks excellent! I wonder what's generating the sound? I'm aware of the oscillator/filter primitives in the HTML5 audio API from the minimoog google doodle, but this seems more elaborate than that.

EDIT: for fun times, load "need more 303," scroll down to the bottom, and change some of the numbers around. Setting the slide() call to 1/1024 yields a nice FM-ish sound. You can even overdrive the filter. Reach for the lasers!

It seems their goal is to make a full-fledged DAW in the browser, but they are not there yet (and far from it from what I can tell). How they would be able to fund all that for $2k is beyond me.

That said, it's a really clever idea to have all the source editable, since js is an interpreted language anyway. One can imagine a future where the community not only shares songs and sounds, but the DSP units themselves. Unlike desktop DAWs, which rely on dylibs (dlls) to supply external synths and effects, a js-based DAW can load, re-compile and hotswap units on the fly.
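To illustrate the hot-swapping point (a sketch; every name here is mine, not from any real DAW): because JS functions are first-class values, a DSP unit can be replaced between buffer callbacks just by swapping a reference, with no dylib loading or restart:

```javascript
// Hypothetical sketch: the audio loop reads the unit through a mutable
// reference, so replacing it takes effect on the very next buffer.
var current = { unit: function (x) { return x; } }; // start with a pass-through

function processBuffer(input) {
  // In a real app this would run inside the audio callback.
  return input.map(function (x) { return current.unit(x); });
}

// "Hot-swap": e.g. after the user edits and re-evals their code.
function swapUnit(fn) { current.unit = fn; }

var buf = [1, -1, 0.5];
console.log(processBuffer(buf));            // pass-through: [1, -1, 0.5]
swapUnit(function (x) { return x * 0.5; }); // swap in a half-gain unit
console.log(processBuffer(buf));            // halved: [0.5, -0.5, 0.25]
```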

Realtime audio DSP seems like about the last thing that in-browser JavaScript was designed to do well.

Janky audio is unacceptable in a way that janky visuals aren't. It totally kills the experience.

Without very strong guarantees about GC latency and thread scheduling that are neither available nor on the horizon for in-browser js, it won't work for anything beyond fun hacks.

Are you aware of the Web Audio API? It's an API designed especially to address those concerns, and achieves low latency and high performance where it is available (Chrome, Firefox and some versions of Safari currently).


Yes, and it works by mostly avoiding realtime DSP in javascript. It provides a high-level API for wiring together prebuilt audio processing nodes. Much like using browser animation APIs vs programmatic animation. My point is about the feasibility of JS signal processing, not the feasibility of using JS to glue together signal processors.

The WebAudio API does provide a ScriptProcessorNode for JS processing, but it appears to suffer from the problems I described. The spec seems to warn against using this for realtime processing, fwiw.
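As a sketch of the JS-processing path under discussion (the browser wiring is shown in comments; the fill function itself is plain JS and the names are mine):

```javascript
// Pure sample generator: fills a Float32Array with a 440 Hz sine.
// In a browser this would run inside a ScriptProcessorNode's
// onaudioprocess callback, roughly:
//   var node = ctx.createScriptProcessor(1024, 0, 1);
//   node.onaudioprocess = function (e) {
//     fillSine(e.outputBuffer.getChannelData(0), t, e.outputBuffer.sampleRate);
//   };
// If GC pauses or scheduling delay this callback past the buffer deadline,
// the output glitches — the jank problem raised above.
function fillSine(out, startSample, sampleRate) {
  for (var i = 0; i < out.length; i++) {
    var t = (startSample + i) / sampleRate;
    out[i] = Math.sin(2 * Math.PI * 440 * t);
  }
  return out;
}

var buf = fillSine(new Float32Array(1024), 0, 44100);
console.log(buf[0]); // 0 — the sine starts at phase zero
```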

"The WebAudio API does provide a ScriptProcessorNode for JS processing, but it appears to suffer from the problems I described."

Yup. That is being worked on by the W3C and the WebAudio committee as we speak. WebWorker-based JS processing is something being considered.



I am expecting this to be resolved in a few months. (hopefully?)

If the threading issues are resolved, then real time audio rendering doesn't take that much processing power and can be easily done in JS.

In fact, if there is nothing else contending for the event loop (like in this example of Wavepot), the ScriptProcessorNode is able to meet the real-time guarantees pretty well, while still doing decent processing.

Also, going forward, with technologies like SIMD.js and asm.js, more can be done within a single onaudioprocess callback.

Meeting real-time guarantees pretty well isn't good enough for a DAW. Not without a lot of buffering (and therefore latency), anyway.

The WebWorker proposal is interesting. If the worker has its own heap and realtime scheduling with a correctly configured GC, it would be a big improvement.

Getting the different implementations to work consistently enough will be challenging, though. Meeting realtime deadlines is difficult enough when working in C with a known OS and audio stack.

We're working on a test suite for implementations, which will hopefully help with some of the basics, but it will be challenging to get full coverage.


If you don't care about latency, and in this "DSP sandbox" application you probably don't, JavaScript is fine for real-time DSP. JS under V8 and similar engines can be quite fast.

You aren't going to digitize 100 MHz of RF spectrum and build an SDR in JavaScript, but for audio rendering work it'd be fine. Cool hack.

I'm assuming that a full-fledged DAW cares a lot about latency. Live performance becomes difficult when latency creeps much above 5ms, for example. A lot of things that you normally don't think about (like when to allocate a buffer of memory) become critically important with low latency requirements.

That isn't an issue for the demoed app, but it is for the hypothetical longer-term goal under discussion.
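For a sense of scale (assuming a 44.1 kHz sample rate), the buffer size maps directly onto added latency:

```javascript
// Latency added by one processing buffer: bufferSize / sampleRate seconds.
function bufferLatencyMs(bufferSize, sampleRate) {
  return (bufferSize / sampleRate) * 1000;
}

// ScriptProcessorNode buffer sizes are powers of two from 256 to 16384.
console.log(bufferLatencyMs(256, 44100).toFixed(1));  // "5.8"  — already near the ~5 ms live threshold
console.log(bufferLatencyMs(4096, 44100).toFixed(1)); // "92.9" — unusable for live playing
```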

Yeah, I think they're making a mistake by using the term DAW. That term already means something, and this isn't it.

"You probably don't care about latency" — famous last words...

Seems $2k is just one of their milestones...

"development is split into milestones on which the features are discussed and decided upon in the mailing list with the help of the community.

a funding campaign is then setup for each next milestone which supports development and keeps the project up and alive."

This kind of thing has existed in desktop/native form for some years. Look up supercollider.

& max/msp even included js but it operated at control rate, not audio, so you couldn't write synths directly as js

Max now has gen[1], so you can write audio/sample rate code.

[1]: http://cycling74.com/products/gen/

yea, want to try it but i'm stuck on max 5... max upgrades are more expensive than they're worth, & now it's splitting into multiple products (max4live, gen), so i'm kind of on an every-other-generation plan unfortunately.

especially annoying when features you like get deprecated (pluggo, ms. pinky), though i understand gen & max4live both kinda serve as pluggo replacements. all in all it's just hard for me to pull the trigger on a $250 upgrade for a few extra integration points on a tool that should be mostly open source anyway.


And I should point out that I personally feel kinda guilty... I met Miller Puckette and he was so enthusiastic about PureData and how much more the community could grow. I realize the biggest reason Max is thriving is because of marketing and now integration with Ableton.

I feel like if something were done to prevent Pd from being wiped off the map we could have had an open source community by now rather than Cycling 74. hm... the media production world is complicated for open source, though.

> I feel like if something were done to prevent Pd from being wiped off the map we could have had an open source community by now rather than Cycling 74.

But we got another repeat of the scenario: the OSS project was unable to muster decent UI while the commercial variant was able to both fund the staff and draw out the needs from its user community to make continual UI improvements. (Speaking specifically to the vast improvements over time culminating in Max 6.) If I sound a bit testy here, it's because I've seen too many promising OSS projects killed by this same problem: terrible user experience.

> I realize the biggest reason Max is thriving is because of marketing

I strenuously disagree. It's because PD absolutely failed to deliver a sane user experience, period. Sure, marketing is important, but Cycling '74's work is what makes Max at all relevant today. About all that PD ever had going for it versus Max was being open source.

As someone who used Max/FTS back on the ISPW[1], even that old UI was vastly better than PD's. I enthusiastically gave PD a go when it first came out, even going back from time to time to check in on it. Every time I found it nigh unusable due to huge UI issues. Especially for what amounts to a visual programming language, this is the death knell. For comparison, it's not like early ISPW Max didn't have its pain points.

Max users owe Miller Puckette a substantial debt for his contributions to this excellent visual signal/event programming environment, so it's unfortunate that PD was never able to pull it together.

[1] https://en.wikipedia.org/wiki/ISPW

yeahhhh i mean i guess part of the "guilt" i was expressing is a regret that I was too young/inexperienced to be able to contribute and I too just handed money over to Max/MSP when they were still on version 4, which IIRC was not very different from Pd at the time.

I think when Max 5 introduced "Presentation Mode" I knew it was over, and then Max for Live was the nail in the coffin, but prior to that I'd seen most students using Max because they believed it was more feature-rich even though most of our professors were using Pd to do more complex work than my Max-toting peers. Similar to Matlab/Octave... I've seen a few Octave users outpacing Matlab users simply because they were free to experiment without having the roadblocks of toolkit purchases.

Cycling 74 has done a good job over time, for sure; I'm just saying that as recently as 5-10 years ago there wasn't such a clear dichotomy, and the release of Max for Live really struck me because it was a proprietary integration, a huge departure from pluggo's VST philosophy.

I think I just have to get used to the fact that despite the great work being done on projects like Pd/Jack/Ardour, audio technology is probably drifting further away from OSS than towards it.

It's tough. I want to contribute to some audio projects, but most of the ones that are gaining traction seem to be putting up paywalls or are too rooted in platform-specific code. I'm toying with some audio stuff on the JVM, will see how that goes....

There's a reasonably active community around PD, but it has the same problem as a lot of other open source software - it's hideously ugly. IIRC the user interface is done with Tcl/Tk. Given the large number of competing products in the same space, people who are not heavily committed to open source have little incentive to use it, unfortunately. I'd say a visual/UI makeover is a higher priority right now than extending the PD library or other tasks.

PD and Max-MSP are from the same stable, and PD is the poor man's Max-MSP. I feel that PD is an excellent learning tool if you want to learn about synthesis or DSP more generally, but as a musician's tool it's just not ready.

A lot of the music made with PD is pure crap, or even a type of anti-music, or puts forward some kind of alternate musical theory of its own which nobody can understand. PD music is usually too detached from normal conventions such as 12 TET, harmonies and progressions to be listenable. Typically the PD user will produce something with weird bell sounds, stuttering percussive noises or massive pad washes going in and out. Not conventional music.

The reason is that PD doesn't provide enough built-in function shortcuts. If you want to make music, PD is a practically useless interface. By the time you've programmed a very basic synth, any inspiration has long gone. An instrument is something that should need no foreplay beyond just turning it on and picking it up, before actually playing it. If you want to make something recognizable as music with PD, you will need to first download someone else's synth patch or copy a subtractive or FM synth from a book. And then you'll find it has no patches you can modify (unless you make them yourself). The time between turning on the computer, setting up your synth and playing your piece is simply too long.

Sharing Supercollider snippets was a thing for awhile on twitter. THE FUTURE IS NOW.

>a js-based DAW can load, re-compile and hotswap units on the fly.

REAPER does just this. It has its own audio scripting language, JesuSonic (also abbreviated JS, which can get confusing), which is interpreted. It's not incredibly popular, but people do share code and modules for it. There are some who sell VSTs/AUs/etc (dlls), but give away the same plugin in a JS format in order to support Reaper.

In my head, a DAW is exactly what it says. A workstation for digital audio. That said, this has the potential to evolve to something more similar to existing professional DAWs and maybe even surpass them.

I'm glad you like it. The sound is generated purely by a JavaScript function returning sample values that are then handled by the Web Audio API and buffered. That's how you hear sound.

On edit: It's really easy to remix tracks that way, even small changes to values yield quite different results. It's fun!
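In other words (a simplified sketch of the model described above, not Wavepot's exact API), the whole instrument is a single function of time that returns one sample per call; the host invokes it sampleRate times per second and hands the buffered results to the Web Audio API:

```javascript
// Sketch: one function of time t (in seconds) returns one sample in [-1, 1].
function dsp(t) {
  var note = 220 * Math.pow(2, Math.floor(t * 2) % 4 / 12); // simple 4-step arpeggio
  return Math.sin(2 * Math.PI * note * t) * 0.5;            // keep it in [-0.5, 0.5]
}

// Render `seconds` of audio into a buffer, as the host would.
function render(seconds, sampleRate) {
  var out = new Float32Array(seconds * sampleRate);
  for (var i = 0; i < out.length; i++) out[i] = dsp(i / sampleRate);
  return out;
}

var buf = render(1, 44100);
```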

>I'm glad you like it. The sound is generated purely by a JavaScript function returning sample values

i feel like this belongs here: https://www.youtube.com/watch?v=tCRPUv8V22o - not in a DAW kinda sense, though

Have you looked at the work of Bret Victor? Integrating some of his concepts for realtime feedback would be awesome (see http://worrydream.com/#!/LearnableProgramming and http://worrydream.com/#!2/LadderOfAbstraction)

This is awesome!

I'll certainly contribute to this. Will the source be opened up?

Here's a delay filter:

  var delay_array = [];
  var delay_length = 5;      // buffer length in units of 1000 samples
  var delay_feedback = 0.6;
  var delay_volume = 0.4;

  function delay(x) {
    var y = 0;
    if (delay_array.length < (delay_length * 1000)) {
      delay_array.push(x);   // still filling the buffer
    } else {
      y = delay_array.shift();
      delay_array.push(x + (y * delay_feedback));
    }
    return x + (y * delay_volume);
  }
Include this in the DSP return code, i.e.:

  return delay(synth);

Yes that's a good delay filter but it's also a good example why this type of user-programmable synth apparatus, which looks similar to CSound, won't catch on. There is much more going on in the typical delay sound itself than just a delayed iteration of a sample. A pure delay is boring. There is the possibility to model all kinds of analog and hardware digital delays with additional coding, but people have been doing that for years and it's no surprise really that commercial companies do it best (and they won't bother unless there's a way to protect proprietary code such as VST).

There's more to digital audio processing than code? I don't get it. Once the collaborative module system is in place (see milestone I) it'll allow for more complex stuff, as you'd be joining components and moving upwards in the abstraction levels, I don't see how big companies can compete with that.

No, that's not the point I was making. I was saying that to make a good delay in software, it can't just be a delay. To sound good, it needs additional algorithms carrying out at least 2 other DSP functions on the return signal.
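Concretely, one common refinement of a plain delay (a sketch under my own naming, using the same feedback/mix idea as the snippet above): damp the feedback path with a one-pole lowpass, so each repeat loses highs and the echoes darken like an analog delay rather than repeating exact copies:

```javascript
// Delay with a lowpass inside the feedback loop.
function makeDampedDelay(samples, feedback, damping) {
  var buf = new Float32Array(samples); // circular delay line
  var pos = 0;
  var lp = 0;                          // one-pole lowpass state in the feedback path
  return function (x) {
    var y = buf[pos];                  // delayed sample
    lp += damping * (y - lp);          // filter the delayed signal
    buf[pos] = x + lp * feedback;      // write input + damped feedback
    pos = (pos + 1) % samples;
    return x + y * 0.4;                // mix dry + wet
  };
}

var echo = makeDampedDelay(2205, 0.6, 0.3); // ~50 ms at 44.1 kHz
var out = echo(1.0);                        // → 1 (dry only; nothing delayed yet)
```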

> I'm aware of the oscillator/filter primitives in the HTML5 audio API from the minimoog google doodle, but this seems more elaborate than that.

An oscillator and a filter modulator are pretty much all you need to create this particular app. There's no reverb, and any delay effects are being "hard-coded" with simple degrading velocities. All synthesizers are built from these ultra-simple components, maybe 2 or 3 more and you have everything you need to build a digital synth purely with web technologies.
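Those two primitives really are small; a sketch of a sine oscillator feeding a one-pole lowpass (the names are mine, not the app's):

```javascript
// Sine oscillator as a phase accumulator, so frequency can change per sample.
function makeOsc(sampleRate) {
  var phase = 0;
  return function (freq) {
    phase += (2 * Math.PI * freq) / sampleRate;
    return Math.sin(phase);
  };
}

// One-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1]), with 0 < a <= 1.
function makeLowpass(a) {
  var y = 0;
  return function (x) {
    y += a * (x - y);
    return y;
  };
}

// Render one second of a filtered 440 Hz tone.
var osc = makeOsc(44100);
var lp = makeLowpass(0.1);
var samples = [];
for (var i = 0; i < 44100; i++) samples.push(lp(osc(440)));
```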

Yeah, it's definitely cool for what it is -- but it is NOT a DAW.

I would call it the audio equivalent of OpenSCAD.

just saw this reply, also thought same thing!


which controls are usable? everything i click just gives me an annoying "gimme $$" popup, haven't actually found a function that works

I just loaded some example projects, hit the play button and started changing the code. The output responded immediately.

ah yah, for some reason mine showed a blank file & couldn't expand projects. refreshed & it's good, this is a pretty awesome project

This is great! I teach high school math and science, and students often ask how sound waves can be turned into music. I give them a big-picture overview of sine waves and superposition, but I've never had a tool that lets students play with the math easily.

I look forward to showing this to students in the fall.

You might also consider showing them Pure Data. The intro documentation/tutorial stuff is quite good at explaining the basics of sound synthesis.

A scene has sprung up around music based on algorithms, called Algorave http://en.wikipedia.org/wiki/Algorave

Vice did an article on the subject a few months back which gives an overview of the scene (from a layman's view) http://www.vice.com/en_uk/read/algorave-is-the-future-of-dan...

> it is quite contrary to normal raves due to the fact that the music can be very stop-and-start and the ingestion of MDMA, ketamine or other illicit drugs does not usually happen

- wikipedia

Oh, this is so good, thanks for posting the link to Vice! Very much Oval-like performance, super awesome!

This is really, really interesting.

come and do some wavepot at an algorave

I'd love to! You guys have some amazing things going on there. I have some live performance features planned for wavepot, so yeah, not quite there yet! But keep an eye on it, I'm sure you'll enjoy what I have in mind.

That was fun. Here is my detuned remix of "on the verge tech mix" https://gist.github.com/jchris/04585454570853297974

I like it! It is cool to see not just the demo projects, but people actually making something with it :)


For another take on live coding in the browser: http://gibber.mat.ucsb.edu/

Check out http://algorave.com for more info on the live coding music scene, which has really taken off in the last year.

I can't wait for someone to combine live coding with Swift Playground/Bret Victor 'revealing the system' style interfaces.

"I can't wait for someone to combine live coding with Swift Playground/Bret Victor 'revealing the system' style interfaces."


Let's build it!

There's also http://toplap.org/

If you like this, you might also be interested in a newsletter I write called Web Audio Weekly. I link to interesting projects that use the Web Audio and Web MIDI APIs as well as more general stuff of interest to musicians and developers.


This is such a great example of a great experience changing behavior. I haven't thought of making music in years, and I've seen other projects kinda similar to this in the past- but this reacts so quickly, so immediate to my inputs, that it's hard to step away from.

You've really got a great product here.

Wavepot team: really awesome stuff! love it.

At Bandhub ( http://bandhub.us ) we are looking for ways to integrate programmed music into our web DAW.

If you guys are up for a collaboration or want to discuss ideas, ping me osi (at) getbandhub (dot) com

Oh wow, I had this idea a few years ago and put together a handful of crap prototypes. Stoked someone is doing it. Well done, I love it!

Cool site. You need more genre categories, however; Synth and/or Electronic would be a good start, since you are missing anything along those lines. Probably an EDM category as well, otherwise that's what either Electronic and/or Synth will end up being.

Also, Find Musicians doesn't seem to be working, at least not in Chrome on OS X.

what do you see? can you send me the console output too?

When it's spinning looking for musicians, the console says:

GET http://bandhub.us/discoverUsersList?genres=&instruments=&col... 400 (Bad Request)

Cool site, can't wait to check it out. Would love to see deeper support for programming and mastering.

we do have some mixing support in beta - ping me if you want early access osi (at) getbandhub (dot) com

re: programming - we are trying to figure out what's the best model. One thing we are trying to avoid is complex timelines to make it easy for the average recreational musician.

Seems similar to http://studio.substack.net/

Wow.. So is Wavepot.com by substack too?


This is really cool, really nice level of abstraction.

The 303 example in the environments in which I learned computer music fundamentals (PD, Max, CSound, Supercollider) would require either much more or less understanding and/or 3rd party implementations of components to generate something similar. Most importantly the sound is really nice.

Btw the categorisation of it being a DAW is dependent on the user perspective, I've created what I would consider to be DAW equivalents in Max/Pure Data, that facilitate audio/midi recording and playback; just because a toolkit doesn't do stuff for you automatically doesn't mean it isn't capable. DAW in my opinion is just a marketing term not a descriptive one.

Thanks! I have background in Music Technology but I'm just starting with DSP myself. Most of the stuff I use are ported from code I found online so I can't really take credit for those though.

A modular piece of software like this can achieve pretty much the same as professional DAWs. Consider modules that add UI elements, automations, etc. It won't be long until the abstraction is high enough so everyone just assembles their own custom DAWs on the fly.

You can help the Wavepot project: Contributors can join the discussion and mailing list here: https://groups.google.com/forum/#!forum/wavepot

They also have a funding campaign (accepting paypal) to finish this project that will include modules like Oscillators, Sequencers and:

Effects: Amp, Chorus, Delay, Dynamics, Eq, Filter, Flanger, Modulation, Phaser, Reverb.

Synths: Ambient, Analog, Bass, Drums, Flute, FM, FX, Modular, Organ, Pads, Percussion, Piano, Samples and Strings.

Just want to clarify that the modules feature is the ability to create them and import them like this:

  import DiodeFilter from 'filters';
  var filter = new DiodeFilter;

Is there a way to contribute without using bitcoin? This is awesome and I want to support it, but I'm not really interested in getting involved in bitcoin.

Paypal donations have been added to the crowdfund. Thank you people, I appreciate your enthusiasm. This is a project I am in love with as well and I wanna see it going places.

This is really great. Reminds me of: https://github.com/overtone/overtone


Extended the melody & progression a bit. I'm so proud, this is awesome!

It goes into a loop when I want to switch a file: "You've made some edits!"


SyntaxError: Unexpected token * at Function (native) at t (blob:http%3A//wavepot.com/576aa761-8f7c-4918-b4e7-8bdd7c77ff39:2:16401) at Function.<anonymous> (blob:http%3A//wavepot.com/576aa761-8f7c-4918-b4e7-8bdd7c77ff39:2:16779) at Function.i.emit (blob:http%3A//wavepot.com/576aa761-8f7c-4918-b4e7-8bdd7c77ff39:1:2656) at DedicatedWorkerGlobalScope.i.isMaster.self.onmessage (blob:http%3A//wavepot.com/576aa761-8f7c-4918-b4e7-8bdd7c77ff39:2:12723)

As a heavy Ableton user and coder I would like to have a coding environment that also has powerful gui libraries so I can quickly implement faders and knobs and stuff when needed but also be able to change code on the fly. Kind of get the best of both worlds for the ultimate hacking music environment. This is something similar and very interesting but for visual stuff based on openFrameworks: https://media.usfca.edu/app/plugin/embed.aspx?ID=mjWcGbE4DUK...

Yeah, I'd really like to see a DAW that incorporates SuperCollider in a transparent way along with traditional audio & MIDI tracks, a bit like how Max is integrated with Ableton.

That's about what I have in mind, picking the bits and pieces you need to create your custom audio workstations in a modular and hackable manner.

Hey, Hey, Hey! I made something with it!


It's the melody of the song 'Popcorn'. While this might not be special, I implemented a first draft of a (going-to-be) multi-track sequencer. </shameless self plug>

Whatcha think?

What a great idea. I want to make synthesized sound effects for a game I'm making, and building them algorithmically in Javascript would be great. My alternatives have involved twiddling a zillion predefined knobs in a overwhelming UI, but as a developer this is far more appealing.

If only I could export the generated sound!

Uncompressed code export with convenience methods like .play() is planned for milestone I. This exact use-case is what I had in mind.

This is amazing, but I wish there was a clear way of knowing when there is an error causing your changes to not be output.

I fiddled for a couple of minutes without any changes before realizing I had forgotten to initialize a variable. (The x checkmarks don't seem to work for unassigned variables.)

A debug console is planned for milestone I that will help for those issues.

Hi stagas, great job on wavepot! Is there any way we could get in touch off of this thread? My startup company is working on a very similar idea, and I think it would be great to know each other. If you'd like, I can write my email in this thread. Hope to talk soon!

This is really, really impressive. Cool range of sounds. Have you thought about attaching a user-friendly, knobby interface to the modules? I could see it as a really easy way for non-programming musicians to learn how to use this and other music generation systems like it.

Thanks! That's a good idea, I have it planned for a next milestone as it is not one of the immediate priorities.

As someone who's completely ignorant to the concepts behind algorave, could anyone more experienced outline the helpful prerequisites to know before I experiment with a tool like Wavepot?

I assume it's mostly the (basic?) physics of sound waves that I'd need to understand?

You don't need to know anything about the physics of sound or programming to get algoraving with Tidal or Ixi Lang.



Made a gist with some of the scripts people have been making.. Add yours to the comments and I'll curate the list.


I'd love to help develop this (and experiment myself!). Is it open source?

If so, where can I find the repo?

Slightly related question: Are there any courses I can take specifically aimed at making music by programming? Lynda or similar preferred, articles / books also work as long as they have examples using some sort of software (not just algorithms)

Unfortunately the course materials aren't available any more, but you should watch for the next offering of this course by the California Institute of the Arts on Coursera: https://www.coursera.org/course/chuck101

This looks way more user-friendly than stuff like CSound. Would be awesome to see a real DAW-like GUI built on top of it...I've always wanted a DAW that would let me flip between GUI editing and code.

If you could use any help coding, please let me know.

seems like this is really a window for executing javascript, no? where's the DAW?

I love it! I want to see something like this for the iPad. I could also imagine a Soundcloud + Spotify + Github hybrid playlist site where you could actually present your creation for people to play in a playlist format.

can you tell more about how you coded the fundraiser modal? is it just checking the balances on those addresses and then converting them to USD? I'm sure it's a simple solution but I'd be happy to hear more about it - I was thinking of making a similar one for my own project, but it would be good if it displayed different addresses per visitor for better obfuscation... let's say, have a wallet full of addresses, then also check server-side whether the displayed one is a valid one

[edit: using something like coinbase would solve this, but how could you work it out without a 3rd party and any fees?]

It is using 3rd party APIs server-side to check wallets' balances, converts doge to btc and then converts the sum of btc to USD. I like the obfuscation idea but I'm not sure it's worth the complexity in this particular use-case.

Looks like a modern (in-browser) take on CSound[1]. I love the idea of modifying the code during runtime.

[1] http://www.csounds.com/

this is really cool... it is really hard to see how the math ends up making music, though. I would have liked an example of a scale or a 4-beat measure. very interesting stuff.

You should also check out http://www.audiotool.com/ - it's a DAW, but it's more of a Chrome plugin.

Also, they're already on their way to migrate to HTML5!

If you like this and have an iPad you should also check out Bitwiz: http://kymatica.com/bitwiz

Can't wait to go to the next Girl Talk show where instead of DJ pressing space bar to play, they press F5 to "Compile and Run..."

Making music with code seems pretty neat, perhaps I'll learn to play an instrument after all some day (that being the instrument of code)!

This could be really useful for prototyping realtime sfx for games.

I am floored by how well it runs on my Moto G, especially the 303 and the subwah patches.

Great job!

Another more DAW-esque pure HTML / js DAW in the browser: http://hya.io

Cool! Reminds me of ChucK[1].

[1] http://chuck.cs.princeton.edu/

Does not work in IE, but does in Firefox and Chrome. Is it just me, or does the application actually not work in IE?

This is really cool. Anyone know of any good tutorials for learning how to make sounds like this?

This is amazing - I had loads of fun playing with this for a couple of minutes - thanks!

I could easily see this become the future of music production. Lovely UI too.

I just want someone to put together a "hypnotoad" sample.

added a filter envelope to "got some 303" and now it sounds much more like a 303 :)

source code here:


using super for the keyboard shortcuts is problematic, at least for me - super-enter for me opens a new terminal, and that appears to take precedence over the browser shortcut

it's probably me, but I get nothing...

- FF (latest beta): waveform thingy works, no sound

- Chrome (latest beta): no waveform, no sound

- IE (11): nothing

Linux Mint 14 (desktop) and CyanogenMod 10.2.1-flo (mobile, mod based on Android 4.3.1) user here.

For Mint: Firefox 29 - everything works; Chromium 34 - only grey background

For CyanogenMod: Chrome 35 - everything works; Dolphin 10.3.1 - only grey background

For me, using FF (various versions) and IE: Just a black background. It worked yesterday - they must be updating something.

reminds me of http://www.openscad.org/

what language is this? i see it is not php...

This is perfect!


Very good work, I like this a lot.

Holy crap! This is great!

Would someone kindly explain why I and two other users (msane and jrlocke) were downvoted for simply expressing delight? Is that somehow unwelcome? I don't see anything against that in the guidelines: http://ycombinator.com/newsguidelines.html

Note, this isn't a complaint, it's merely a request to understand why my comment is inappropriate and to understand how I can express my approval of a submission without losing karma.

Usually, when someone bothers to tell you, it's because the comment "doesn't contribute to the dialog" or some such reason - i.e., too obvious, or it seems like pandering for karma.

Anyway, I didn't downvote you btw, just my 2 cents from what I've read in the past.

It's not against the guidelines, but it doesn't create conversation. Many people find that fairly useless, and prefer comments with substance that advance the community.

The best way to express your approval of a submission? Upvote it.

>The best way to express your approval of a submission? Upvote it. //

Except with hidden vote scores it doesn't express anything at all (except changing a number on the OP's interface; the OP doesn't even know that you upvoted them).

If you want to express your approval to anyone other than the parent/OP or have anyone know who is expressing approval [that is occasionally relevant] then you have to comment.

I once thanked someone for submitting something interesting after their article had been up for a few hours and nobody commented. Someone downvoted me and took the time to condescendingly explain why my comment was worthless. I really don't understand that attitude. :|

If you had simply said why you liked it, you probably would have been OK. That at least adds something.



This is dope
