I have a space of about 2000 dimensions. Each coordinate is an integer ranging from 0 to 1000. I have a complex "score" function which I'd like to minimise, and I know very little (I originally said "nothing" - that was a mistake) about its characteristics.
The following was added as further clarification:
The scoring function is probably mostly "continuous"-ish. Small changes in the coordinates probably lead to small changes in the score, although there will be areas (hyper-volumes) where this isn't entirely true. It is possible to use non-integer coordinates for exploration, and the evaluation/scoring function is probably reasonably well behaved when doing this.
Hill-climbing/descending techniques give moderate results, but there are lots of local minima. Some local minima have wide catchment areas (hyper-volumes) but are not particularly extreme. Local gradients are mostly present - there aren't likely to be huge swathes that are effectively "flat".
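For concreteness, here is a minimal sketch of the kind of greedy hill-descent baseline described above, assuming the score function is exposed as a plain Python callable `score(x)` taking a list of integers; the callable name, the step sizes and the stall limit are all placeholders, not part of the actual problem:

```python
import random

def hill_descend(score, dims=2000, lo=0, hi=1000, max_stalls=500, rng=random):
    """Greedy coordinate descent from a random integer start point."""
    x = [rng.randint(lo, hi) for _ in range(dims)]
    best = score(x)
    stalls = 0
    while stalls < max_stalls:
        i = rng.randrange(dims)                  # pick a coordinate at random
        step = rng.choice((-1, 1)) * rng.randint(1, 10)
        old = x[i]
        x[i] = min(hi, max(lo, old + step))      # stay inside the 0..1000 box
        s = score(x)
        if s < best:                             # keep the move only if it improves
            best, stalls = s, 0
        else:                                    # otherwise undo it and count a stall
            x[i] = old
            stalls += 1
    return best, x
```

This is exactly the sort of search that gets trapped in the local minima mentioned above.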
A problem like this needs human analysis. I'm more interested in small, standard optimization problems that can be solved in real time on a normal quad-core server, and I want to give consumers an efficient way to interact with the solution via XML/JSON so they can build web apps.
No, it doesn't depend on human analysis. It depends on the algorithms you use to solve it. For example: calculate one solution, vary a coordinate, and calculate again. If the result is a better optimum, mark it as the best choice so far and note the improvement over the previous value. Iterate until the improvement becomes negligible or the answer is good enough.
Of course, this can take a lot of time and CPU power (sometimes too much), and it can end up in a local minimum if you don't vary the starting point and the coordinates enough, or include some random changes, so that the whole search space is well covered. The outcome may also depend (sometimes strongly) on the biases and quality of the random number generator you use to vary the coordinates.
You can keep a list of several local minima, and perhaps even ensure that they are not "close" to (or equal to) each other, so that as the algorithm explores it avoids already-known local minima and keeps seeking new ones.
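A minimal multi-start sketch of that bookkeeping, again assuming a hypothetical `score(x)` callable over integer lists; it only filters out near-duplicate minima after each restart rather than actively steering the search away from them, and the separation threshold `min_sep` is a placeholder you would have to tune:

```python
import random

def euclid(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def multi_start(score, dims=2000, lo=0, hi=1000,
                restarts=20, iters=20000, min_sep=50.0, rng=random):
    """Run several random-restart descents and keep only minima
    that are not too close to ones already found."""
    minima = []                                        # list of (score, point) pairs
    for _ in range(restarts):
        x = [rng.randint(lo, hi) for _ in range(dims)]
        best = score(x)
        for _ in range(iters):                         # crude greedy local search
            i = rng.randrange(dims)
            old = x[i]
            x[i] = min(hi, max(lo, old + rng.choice((-1, 1)) * rng.randint(1, 10)))
            s = score(x)
            if s < best:
                best = s
            else:
                x[i] = old
        if all(euclid(x, p) > min_sep for _, p in minima):
            minima.append((best, x[:]))                # record a genuinely new minimum
    minima.sort(key=lambda sp: sp[0])                  # best minimum first
    return minima
```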
It depends on whether you can calculate the gradient and whether it is known and defined, so this all comes down to the function in question. Also, the phrase "perfect accuracy" begs a definition here: how many decimals are we talking about, and do we want all the "optimal solutions" or just one optimal solution?
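If no analytic gradient is available, one common workaround is a finite-difference estimate. A minimal sketch, assuming the hypothetical `score(x)` callable accepts non-integer coordinates, as the clarification suggests it does:

```python
def numeric_gradient(score, x, h=1e-3):
    """Central-difference estimate of the gradient of score at x."""
    base = list(x)
    grad = []
    for i in range(len(base)):
        xp, xm = list(base), list(base)
        xp[i] += h
        xm[i] -= h
        grad.append((score(xp) - score(xm)) / (2 * h))  # d(score)/dx_i
    return grad
```

With 2000 dimensions this costs 4000 evaluations per gradient, so it is only worthwhile if a single evaluation is cheap.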
I also work with optimization models that are provided as web services. I can give a rough idea of how mine works.
A model for NFL match prediction: my clients send data in XML/JSON format (it may be about a club, a player, etc.), my model processes the data and sends back the solution, again in XML/JSON format, which they can present as a betting prediction or whatever else they want.
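To make the exchange concrete, here is a sketch of what such a request/response pair might look like; the field names and values are entirely hypothetical placeholders, not the actual service's schema:

```python
import json

# Hypothetical payload shapes for the request/response exchange described above.
request = {
    "match": {"home": "Packers", "away": "Bears"},
    "players": [{"name": "A. Example", "position": "QB"}],
}
response = {
    "prediction": {"home_win_probability": 0.62},
}

print(json.dumps(request))   # what the client would POST
print(json.dumps(response))  # what the model would send back
```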