michielvoo's comments | Hacker News

Yeah, I prefer not to use the built-in Visual Studio project templates. Best to always use bin-deployed binaries explicitly installed from NuGet packages. In the future (later this year) we can also bin-deploy the runtime (.NET Core) and the base class library (CoreFX).

-----


"we removed the row of LEDs and the light guide panel that distributed light throughout the keyboard and instead placed an individual LED under each key."

I'm looking forward to seeing creative hacks based on this...

-----


It supports both 16-bit and 8-bit modes. I find it quite annoying to have to keep track of which mode a register is in.

-----


Using STM to write the DMA registers (a single CPU instruction to trigger DMA) seems like a valid optimization, unrelated to piracy prevention.

-----


It certainly seems valid, but in the context of all of the other protections it's difficult to say for sure whether it is or not.

But considering it either hasn't been used in other games, or at least infrequently enough that this emulator's author has never seen it before, I'm leaning towards it being another protection.

-----


Or they upgraded their compiler/enabled more optimization.

Combining sequential writes into an STM is a standard optimization.

-----


With STMIA it's an understandable optimization, but nobody would expect STMDA to work the way it does. That would require careful testing and a deliberate choice to use the weird one even though it feels counterintuitive.

-----


The order is guaranteed by the architecture though (not merely by the implementation), provided the target is Device or Strongly-ordered memory. ("For a VLDM, VSTM, LDM and STM instruction with a register list that does not include the PC, all registers are accessed in ascending address order for Device accesses with the non-Reordering attribute." -- v8 ARM ARM.) So you don't need to test at all; you can just rely on the documentation to tell you it works.

Incidentally, the note "Since the write is done with one instruction, a DMA cannot preempt the CPU in the middle of the writes" from the article is likely not correct. The STM may be only one insn but it may generate multiple memory accesses to the bus, so it's quite plausible that a DMA device might get accesses in between words. (Of course RAM is usually mapped Normal in which case caches and store buffers will be heavily reordering it anyhow, so nobody relies on ldm/stm ordering here.)

-----


The arguments used for constrained type parameters can be checked by the compiler, so no casting is necessary at runtime.

"... C# does strong type checking when you compile the generic type. For an unconstrained type parameter, like List<T>, the only methods available on values of type T are those that are found on type Object, because those are the only methods we can generally guarantee will exist. So in C# generics, we guarantee that any operation you do on a type parameter will succeed."

Edit (more explicit quote):

"When you say K must implement IComparable, a couple of things happen. On any value of type K, you can now directly access the interface methods without a cast, because semantically in the program it's guaranteed that it will implement that interface. Whenever you try and create an instantiation of that type, the compiler will check that any type you give as the K argument implements IComparable, or else you get a compile time error."

Furthermore (related to the CLR avoiding boxing costs):

"I'm just pointing out that we do fairly aggressive code sharing where it makes sense, but we are also very conscious about not sharing where you want the performance. Typically with value types, you really do care that List<int> is int. You don't want them to be boxed as Objects. Boxing value types is one way we could share, but boy it would be an expensive way."

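To make the quotes concrete, a minimal sketch (the class names are made up for the example; I'm using the generic IComparable<K> so value types also avoid the boxing mentioned in the last quote):

    using System;

    // Unconstrained: only System.Object members are available on T.
    class Bag<T>
    {
        public string Describe(T item) => item.ToString(); // OK: Object.ToString()
        // item.CompareTo(...) would not compile here; the compiler can't prove it exists.
    }

    // Constrained: the compiler guarantees K implements IComparable<K>,
    // so its members can be called directly, with no cast and no runtime check.
    class SortedBag<K> where K : IComparable<K>
    {
        public bool IsBefore(K a, K b) => a.CompareTo(b) < 0;
    }

    class Demo
    {
        static void Main()
        {
            var ints = new SortedBag<int>();        // int implements IComparable<int>
            Console.WriteLine(ints.IsBefore(1, 2)); // True

            // new SortedBag<object>();             // compile-time error: 'object'
            // does not satisfy the IComparable<object> constraint.
        }
    }
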
-----


What I wonder is: Take "SortedList<T> where T:IComparable". Now there is a generic add(T t) method, which needs to call t.compareTo(x). As add does not know the dynamic type of t, we don't know the vtable offset of compareTo at compile time. Thus the compareTo call cannot be compiled to "load method pointer from vtable; call it". We need something more expensive or JIT magic (traces, guards, etc.).
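
For concreteness, a toy C# sketch of the Add method being described (not the real SortedList<T>):

    using System;

    class SortedList<T> where T : IComparable<T>
    {
        private T[] items = new T[4];
        private int count;

        public void Add(T t)
        {
            if (count == items.Length)
                Array.Resize(ref items, items.Length * 2);

            // The static type of t is only "T : IComparable<T>", so this call site
            // can't be bound to a fixed vtable slot ahead of time. As I understand it,
            // the CLR shares one compiled body for all reference-type T's and uses
            // interface dispatch there, but JIT-compiles a specialized body per value
            // type, where the CompareTo call resolves directly and nothing is boxed.
            int i = count;
            while (i > 0 && items[i - 1].CompareTo(t) > 0)
            {
                items[i] = items[i - 1];
                i--;
            }
            items[i] = t;
            count++;
        }
    }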

-----


Regarding the CLR, the document links to MSDN [1] where it says that "the runtime generates a specialized version".

[1] http://msdn.microsoft.com/en-us/library/f4a6ta2h.aspx

-----


> What is rendering that HTML if not a browser?

The rendering engine (Gecko). I guess for it to make sense you must separate the rendering engine from the browser, which adds things like tabs, bookmarks, themes and add-ons.

-----


This can be seen today in Servo, which can browse the internet but has no GUI to speak of (e.g. to visit a URL you must pass it as a command-line parameter to the executable).

-----


I compared the Win32_Product entries that are installed before and after. Even with all checkboxes unchecked, the installer adds 100+ packages. Unfortunately most of these are not listed in the Add/Remove Programs control panel.
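
(For anyone wanting to reproduce this: a rough sketch of taking such a snapshot from C# via WMI, assuming a reference to System.Management; run it before and after the install and diff the output. Not the exact code I used.)

    using System;
    using System.Linq;
    using System.Management; // reference System.Management.dll

    class ProductSnapshot
    {
        static void Main()
        {
            // Win32_Product lists MSI-installed products; print one name per line
            // so two snapshots can be diffed.
            using (var searcher = new ManagementObjectSearcher("SELECT Name FROM Win32_Product"))
            {
                var names = searcher.Get()
                                    .Cast<ManagementObject>()
                                    .Select(p => (string)p["Name"])
                                    .OrderBy(n => n);
                foreach (var name in names)
                    Console.WriteLine(name);
            }
        }
    }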

-----


Windows Management Framework 5.0 Preview [1] contains early versions of Microsoft's OneGet and PowerShellGet.

OneGet can install applications from repositories, using any number of providers. The preview version comes with a version of Chocolatey [2] built in managed code (C#) instead of PowerShell, but it supports the same Chocolatey gallery (a repository of software packages) and protocol.

PowerShellGet [3] can install PowerShell modules (e.g. make new cmdlets available on the PowerShell command line). The modules can be delivered as scripts or as compiled .NET assemblies. By default PowerShellGet is configured to use a (closed preview) repository [4], which makes it not very usable yet, but it's interesting to know what direction it is headed in.

As a more practical alternative to PowerShellGet, have a look at PSGet [5], a PowerShell module for installing PowerShell modules, with its own dedicated repository. Hopefully Microsoft's PowerShellGet will support PSGet as a provider in the future as well; the names are certainly confusing.

Desired State Configuration (DSC) [7] is a new (Windows 8.1) capability to configure Windows using a declarative syntax extension of PowerShell v4. It can set registry keys, create files and directories, enable Windows Features, and more. DSC 'resources' are PowerShell modules, so DSC's capabilities can be extended; see this GitHub repository [6] for examples.

Alternatively, have a look at Boxstarter [8]. It can do installation and configuration, and you can host your 'starter script' online and launch it with a single command. Boxstarter will take care of all the Windows restarts that might be necessary along the way.

[1]: http://www.microsoft.com/en-us/download/details.aspx?id=4407...

[2]: https://chocolatey.org/

[3]: http://blogs.msdn.com/b/powershell/archive/2014/05/20/settin...

[4]: https://msconfiggallery.cloudapp.net/

[5]: http://psget.net/

[6]: https://github.com/powershellorg/dsc

[7]: http://blogs.technet.com/b/privatecloud/archive/2013/08/30/i...

[8]: http://boxstarter.org/

Be aware that, although most modern package solutions for Windows use NuGet as a packaging format, NuGet.exe itself (the application) and nuget.org (the website) are meant for managing software development dependencies, not for installing/updating end-user applications or command-line utilities.

-----


I was under the impression that Sphinx cannot return documents, only IDs. (The implication is that you need to query your data source if you want to show the results.) Is that correct?

-----


Yes, which I think in simple cases is a benefit. For example, if you are using an ORM, the massive queries in the OP are really a pain to deal with. With Sphinx, you perform the search using the search API and get back some IDs, then just query those IDs using your standard ORM constructs to get the (small) set of whatever objects/rows you are loading. Since you are loading small and fixed amounts of data by primary key, the performance shouldn't be an issue.
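
Something like this, in C# terms (the search client and entity types here are hypothetical stand-ins, not the actual Sphinx or ORM API):

    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical stand-ins; the real Sphinx client and ORM types will differ.
    public interface IFullTextSearch
    {
        // Returns only the primary keys of matching documents, most relevant first.
        IReadOnlyList<int> Search(string query, int limit);
    }

    public class Article
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Body { get; set; }
    }

    public static class ArticleSearch
    {
        // 1) Ask the search engine for IDs. 2) Load the rows by primary key via the ORM.
        public static List<Article> Find(IFullTextSearch search, IQueryable<Article> articles, string query)
        {
            IReadOnlyList<int> ids = search.Search(query, limit: 20);

            // Loading a small, fixed number of rows by primary key is cheap for the db.
            var byId = articles.Where(a => ids.Contains(a.Id)).ToDictionary(a => a.Id);

            // Re-apply the engine's relevance order; skip IDs whose rows were deleted
            // since the index was last rebuilt (stale index entries).
            return ids.Where(byId.ContainsKey).Select(id => byId[id]).ToList();
        }
    }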

If you are looking for caching and other more complex features, I'd recommend elasticsearch (and I do highly recommend it). But sphinx is simpler, and I think it is a good alternative for the type of functionality talked about in the blog post. Granted, I really don't think elasticsearch is all that complicated either, and it is also really well documented. But sphinx is just painfully simple for the basic use cases (like anything postgres can do).

-----


Additionally, it helps to alleviate issues with stale data in the search index (which is often updated periodically). If you have a list of IDs to query for, there's no harm if one of them is no longer in the db; you just won't show it.

As ever, it's a tradeoff. I used to keep everything required for search results pages in the search index (solr, at the time). Eventually I decided that the additional db lookup was well worth the extra few milliseconds to make sure I was working with reliable data.

-----


You can actually store a bit more than that in Sphinx. You can add attributes to your Sphinx documents that can be used either for filtering or just as extra metadata when returning query results. The downside is that they get added to the index, which has to fit in memory (I think; it's been a while).

-----
