A turning point for GNU libc (lwn.net)
192 points by corbet on Apr 3, 2012 | 86 comments



> The kernel may be the core of a Linux system, but neither users nor applications deal with the kernel directly.

The amount of complexity in a modern libc, glibc in particular, is mind-boggling.

One thing I love about Go is that it bypasses libc entirely: the Go runtime and stdlib use syscalls directly (well, except on Windows, for obvious reasons). In a way, a Go program is "closer to the metal" than a C program.
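
A minimal sketch of what that looks like on Linux/amd64, using the standard syscall package: writing to stdout goes straight to the write(2) system call, with no libc in the path.

    package main

    import "syscall"

    func main() {
        msg := []byte("hello via a raw syscall, no libc involved\n")
        // syscall.Write issues the write(2) system call directly;
        // on Linux there is no libc write() underneath it.
        syscall.Write(1, msg)
    }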


> in a way a Go program is "closer to the metal" than a C program.

It's amusing how often people try to one-up C with their favorite language, and how these attempts are almost never successful.

You can write C that doesn't use libc, that has no access to any system services, or indeed that runs under no operating system at all. For example, an operating system kernel itself. In no way is a Go program "closer to the metal" than a C program.


I was referring to a 'standard' program using the language's corresponding stdlib. And while not as useful these days, C's "stdlib" is incredibly thick and complex. Go's stdlib is probably comparable to Plan 9's basic C libraries in complexity, but is portable and provides much more functionality.

Also note that Go can do all the things you mention, including writing a kernel.

Just to be clear, the real issue with C is not the language so much as the horrible libraries that have been built over decades of clumsily piling up complexity.

And the people who suggest glib is the answer to C's library needs are scary and depressing; glib just adds another layer of (very unportable!) crud on top of libc for no real gain other than keeping people from learning how to properly write C (data structures in particular).


> I was referring to a 'standard' program

Though you probably didn't mean "standard" literally, writing C programs that have no access to the standard library is explicitly mentioned in the standard, which calls this a "freestanding implementation."

> Also note that Go can do all the things you mention, including writing a kernel.

You can't write memory management code in a language that provides automatic memory management. You just can't. And while I can believe that people who like Go are experimenting with using Go for most of an OS kernel (with a C kernel underneath it to handle things like memory management and probably interrupts), it's not the same as C where the whole kernel can be in C with the exception of highly hardware-specific code (booting, context switching code, syscall gates, etc). Aside from that, I don't think the performance of GC will ever be acceptable for a high-performance kernel.

> Just to be clear, the real issue with C is not the language so much as the horrible libraries that have been built over decades of clumsily piling up complexity.

Wait, now C is bad because some people wrote some bad libraries for it? Then don't use those libraries? No one's forcing you to use glib.


You do realize that people write kernels in Go, right?


That means that a Go program is exactly as close to the metal as a C program, not closer (the C runtime and stdlib use syscalls directly, too - the C runtime is libc).


What makes this advantageous for Go? I imagine it takes a lot of knowledge to optimize a libc for different CPUs and kernels.


It's not an advantage at all.

Google Go uses syscalls directly because its threads ("goroutines") were designed for a 32-bit architecture, so their solution to not running out of virtual address space was to use small stacks that can grow. But that means it can't just call a normal C function directly because there may not be enough stack, so they have to reinvent libc on every platform they port to. Meanwhile virtually every desktop is 64-bit and even ARMs will soon be 64-bit.

Why design a new language around 32-bit computers? Good question.


The disinformation in your post is mind boggling. Go was not designed for 32-bit; segmented stacks are great for reasons that have nothing to do with address space limits, and in fact there are C implementations that use segmented stacks; segmented stacks do not preclude using the system libc; the reasons for bypassing libc were completely different; and calling C code from Go works just fine.


So, any hints about the real reason for passing up libc? Sounds like an interesting story.


Or in other words "you're wrong because I said so. upvotes please!".

Thread count vs stack space is a well-known tradeoff in 32-bit. It doesn't take a rocket scientist to know this is the reason for segmented stacks in Google Go even if there's no smoking gun (wayback didn't archive the golang site for a long time due to a misconfigured robots.txt).


Given that it is well known that Go was developed mainly in a 64-bit environment, that the 64-bit compilers produce much faster code and have been tested far more than the 32-bit toolchain, and that some people have even complained that Go favours 64-bit systems too much (if for no other reason than that it's what the main developers use), your whole argument is clearly nonsense; that you are a well-known troll with some kind of fixation on bashing Google is just a coincidence.


In Rust we use the system libc with segmented stacks, and we call C code by switching stacks. (This has been observed to get us into trouble with the indirect branch predictor, unfortunately, but we plan on addressing that by duplicating the stack switching stub for each C function.) Go is similar (I looked at its code when implementing some of this stuff). So it's not true that having segmented stacks undermines Go's ability to interact with C code.

Note that another cool thing we can do is to statically link clang-compiled C code with Rust code and LLVM can inline your small C functions directly into your Rust functions and save you the stack switch -- this is one of the coolest things we can do with LLVM, IMHO.


That's interesting. What is the cost of switching the stack when calling C code? Do you allocate a new stack every time a C function is called from Rust?

If you inline a compiled C function, don't you have to do the stack switching for whatever that inlined function might call? Or do you only inline code that doesn't have any function calls?


OK, I understand that you can switch to a larger stack to call C code, although, as I said, you can't "just call a normal C function directly". You can even amortize the cost of acquiring it by having the stack stick with the thread using it (no locking, alloc, etc. after the first use).

That might be ok for Rust where many 'tasks' might be computational only and never call a C function. But with Google Go, I think they said "we want to have a million threads each reading a socket" or some other arbitrary limit which isn't possible in 32-bit using system threads (even with segmented stack).

Does Rust do its own threading, and if so, what are the advantages you perceive to doing so?


Interesting. Is there a place you can point me to, to read up on this issue?


From the horse's mouth:

http://golang.org/doc/go_faq.html#goroutines

When they say "system resources" they mean stack space. It's limited in 32-bit because a stack large enough to be useful takes up too many virtual address space. For instance an 8 MiB stack takes up 1/512 of the 32-bit address space so you can only have max 512 threads, but in 64-bit you can have 33 million (current CPUs limited to 2^48 address space)
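
The arithmetic, as a quick sketch (the 2^48 figure is the usable virtual address space on current x86-64 CPUs):

    package main

    import "fmt"

    func main() {
        const stack = 8 << 20              // 8 MiB per thread stack
        fmt.Println((1 << 32) / stack)     // 32-bit address space: 512 stacks
        fmt.Println((1 << 48) / stack)     // 48-bit address space: 33554432 (~33 million)
    }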

So they segment the stack in Google Go to get more than some small number of threads. Even though Linux (edit: the kernel) only uses 4k or 8k per thread, some OSes have severe limits, so they compound the segmented-stack problem (making C interop slow and cumbersome) with userspace threading (M:N). A collection of links on why that's a bad idea:

http://www.kegel.com/c10k.html#1:1

Eventually they'll concede that this wasn't a good idea, but I wouldn't bet on when.


There are many more system resources and costs associated with threads, most of them more important than mere virtual address space consumption: context switch time and thread creation time, for example.

Linux doesn't use a 4k or 8k stack per thread; the pthreads default on Linux is 2MB. The only thing that's 4kB or 8kB is the kernel stack, which doesn't have anything to do with this (the kernel stack is 12kB or 24kB on Windows). You might have heard about this "grow stack on demand" thing, but every OS does it, not only Linux, and it's about growing the committed memory; the virtual memory used by a stack is fixed at thread creation.

Calling C code is very fast and it's trivial to do from Go, http://golang.org/doc/articles/c_go_cgo.html. Segmented stacks are available in some C implementations as well.
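
For reference, the cgo article boils down to something like this (a minimal sketch; the C snippet lives in the comment immediately above the import "C" line):

    package main

    /*
    #include <stdio.h>
    static void hello(void) { puts("hello from C"); }
    */
    import "C"

    func main() {
        // cgo generates the stack-switching glue; the call site stays trivial.
        C.hello()
    }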

It's not Google Go, it's simply Go; there's not a single Google reference on the page, and that's intentional. Looking at your previous posts, I see you have an obsession with Google.

In short, you have absolutely no idea about what you are talking about and you spread malignant misinformation.


He's a notorious anti-Google troll on Reddit. He adds "Google" to "Go" so that his negative posts will be associated with both Google and Go.


There existed a programming language called Go before Google came along with their programming language also named Go.

Not that anyone knew of the older Go before the whole naming debacle, but it may not just be anti-Google trolling to refer to it as Google Go.

/devil's advocate hat off.


That one has a Yahoo! style exclamation point ;)


Does anyone know why the Go runtime is written in C? The FAQ says it is to get around bootstrapping, but that does not seem obvious, as the Go compiler (6g) is written in C, so why can't the runtime be written in Go and compiled with the rest of the Go library?

Also, is the incompatibility between Go and C just due to the segmented stacks in Go? What about function call conventions? Anything else?


If the runtime were written in Go, what would the runtime's runtime be?


Well given that the current runtime (written in C) doesn't have a runtime, I would assume there wouldn't be one, as Go compiles down to native code just like C does, using the same toolchain (6c/g -> 6a -> 6l).

My current guesses are: 1) Performance 2) The need to call assembly code easily


C is capable of running without a runtime, but Go isn't. This has nothing to do with compiling down to native code and everything to do with the language semantics.


Right, so native code generated by Go probably needs some kind of runtime initialization before it can even start executing.


Among other things which aren't expressible in Go, that's part of it. You could do those bits in Assembler and the rest in Go if you really wanted to stay out of C, but the net effect is just that the language runtime becomes harder to port to new platforms.


Please note that you can call assembly code from Go easily.

The runtime needs to break the abstractions Go provides; it needs to break Go's memory model, be aware of the garbage collector, be aware of stacks, grow and switch them, and do other things: http://research.swtch.com/goabstract.

C was the pragmatic choice; they could have done it all in assembly (various bits are written in assembly), but C is a portable assembler.
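
To illustrate the "calling assembly from Go" point, a minimal sketch (amd64 assumed, names made up): the Go file declares the function with no body, and a .s file in the same package implements it in the Go toolchain's Plan 9-style assembler.

    // add.go
    package asm

    // Add is implemented in add_amd64.s; the Go side is only a declaration.
    func Add(a, b int64) int64

    // add_amd64.s (Go's assembler syntax; $0-24 = no locals, 24 bytes of args+result)
    TEXT ·Add(SB), $0-24
        MOVQ a+0(FP), AX
        ADDQ b+8(FP), AX
        MOVQ AX, ret+16(FP)
        RET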


> There are many more system resources associated with threads, most of them are more important than mere virtual address space consumption, like context switch time and thread creation time.

I think you should read the M:N scheduling links. I'll leave it at that.

> Calling C code is very fast and it's trivial to do from Go, http://golang.org/doc/articles/c_go_cgo.html. Segmented stacks are available in some C implementations as well.

http://golang.org/src/pkg/runtime/cgocall.c

Read the comment at the top. Hardly trivial.

> It's not Google Go, it's simply Go, there's not a single Google reference on the page and that's intentional. Looking at your previous posts I see you have an obsession with Google. ... In short, you have absolutely no idea about what you are talking about and you spread malignant misinformation.

Shoot the messenger. Maybe I post from a different account when I am critical of Microsoft, or Apple. Why would that matter to you?


> I think you should read the M:N scheduling links.

I think this bears emphasizing.

Slow language runtimes, like Erlang's beam.smp, can get away with using M:N threading because the overhead elsewhere is so high. However, once you're dealing with language implementations which are reasonably performant, which includes Go, M:N threading has problems which suddenly surface above the waterline. This includes various inefficiencies due to two schedulers fighting each other, significantly reduced system observability, and bizarre runtime behaviour (again, due to two schedulers fighting each other). If the ability to understand and maximize performance matters, M:N threading is a disaster.

We've been down the M:N road before for efficient systems (Solaris, Linux, the *BSDs, and others). Those implementations -all- died. Now Go and Rust (both amazingly and depressingly) have resurrected this. I can only hope that the programming abstraction niceties that Go takes advantage of are worth the performance tradeoff, because there is a definite tradeoff.


> I think you should read the M:N scheduling links. I'll leave it at that.

Of the links in that list, 2 worked for me, and both were from 2002. Not exactly up-to-date information. Also, Erlang does something similar with respect to multiplexing multiple processes onto fewer threads, so Go isn't exactly alone in this regard. Even Twisted and Node use a weaker form of this, where everything runs on one thread.

> Read the comment at the top. Hardly trivial.

First, the comment's meant for language implementors. You don't need to read this comment to be able to do C interop in Go. Second, this isn't even that complicated... did you imagine C interop with other languages is done in a nice, simple way? Any language needs a way to translate calling conventions from the source to the destination and back when doing interop.

And, yes, please stop calling it Google Go. People hardly ever say Microsoft C#, Sun Java, Apple Objective-C, Ericsson Erlang, or Netscape JavaScript...


> Of the links in that list, 2 worked for me, and both were from 2002. Not exactly up to date information.

The links for Ingo and Ulrich worked; those are enough of an indictment of M:N threading. What's changed in the last decade? OS gurus tried M:N threading and it failed. Java tried 'green threads' and it failed, and now Google Go devs are trying the same thing. Guess what's going to happen?

> Also, Erlang, does something similar w.r.t. to multiplexing multiple processes onto fewer threads

Erlang was written in Prolog, and the desktop VM was single-process until several years ago. That's not a good example implementation to base a new language's design on. Meanwhile the good things, like a separate heap/GC per 'thread', weren't copied. /forehead

> did you imagine C interop with other languages is done in a nice, simple way? Any language needs a way to translate calling conventions from the source to the destination and back when doing interop.

I don't think I know of another language that locks another thread, transfers control to it, and then finally calls the C function. That's crazy talk; it's why they say "down the rabbit hole" in that comment. Most languages do some type checking (for instance, erroring on passing a BigInteger > uint32_t to a function taking uint32_t), marshal the arguments, and then call the C function. Maybe they need to pin an object or convert it to a C-friendly form (make a struct that describes properties of the object). But yeah, in general C interop is very simple in most languages.

> And, yes, please stop calling it Google Go. People hardly ever say Microsoft C# or Sun Java or Apple Objective C, or Ericcson Erlang, or Netscape Javascript...

I don't say "Google Dart" because Dart isn't an unsearchable, ambiguous name for a programming language. I do say "Apple Blocks" because 'blocks' is unsearchable in the context of programming languages. You can't blame me for Google choosing a terrible name.

Edit: What do you think it says to neutral readers when facts, reasons, links are downvoted?


> What's changed in the last decade?

The nature of the problems we are trying to solve, and the hardware we are solving them on.

> But yeah in general the C interop is very simple in most languages.

You're not taking into account the fact that the C code cannot be allowed to block the main loop of the program, as is true in any event-driven system. Yeah, other programming languages might not have to deal with these concurrency issues, either by ignoring threading, giving up on an asynchronous system, or giving up on C interop, none of which seems like a worthwhile sacrifice. What is your problem with this implementation technique? It's performant in practice, and it's not in your face when actually writing code. This is not complexity you have to worry about. It happens "behind the scenes", as it were.

Go might be challenging to search for, but you're not writing search queries, you're writing posts about programming. It's pretty unambiguous as far as I'm concerned.

And FYI, I've never downvoted you. I think, with the increasingly desperate tone in your writing, it's not hard for people to realize that you're just hating on Go, and at least the responses to your misinformation are educational and often interesting.


> the hardware we are solving them on

The increase in cores, NUMA now being pervasively inescapable, and context switching becoming cheaper actually make this problem worse for M:N threading.

I don't have a problem with Go using M:N threading, but we must also be honest that something is being given up in order to gain something else; Go has a nicer programming model, but its performance envelope shrinks and becomes less predictable.


It may not be a victory across the board, but it certainly isn't a loss across the board. That's all I'm saying.

Let me answer some of your points more specifically: NUMA is a concern, but with some work on the scheduler to give goroutines slight affinity for threads, this can be largely mitigated. This could be as simple as a scheduler policy like `take the first queued goroutine that previously executed on this thread, looking up to 5 deep into the queue, otherwise take the first one` instead of `always take the first one`. The difficulty with this strategy is that you could experience starvation of goroutines, and there are a ton of other complexities, which is why the current scheduler is so simple. I believe this will get better...
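
A rough sketch of what such an affinity-biased pick could look like (all names hypothetical; this is nothing like the actual runtime code):

    package sched

    // G stands in for a runnable goroutine; lastThread records where it last ran.
    type G struct {
        id         int
        lastThread int
    }

    // pick scans at most 5 queued goroutines for one that last ran on this
    // thread; if none is found it falls back to plain FIFO.
    func pick(queue []*G, thread int) (*G, []*G) {
        if len(queue) == 0 {
            return nil, queue
        }
        limit := 5
        if len(queue) < limit {
            limit = len(queue)
        }
        for i := 0; i < limit; i++ {
            if queue[i].lastThread == thread {
                g := queue[i]
                // remove element i, keeping the rest of the queue in order
                return g, append(queue[:i], queue[i+1:]...)
            }
        }
        return queue[0], queue[1:]
    }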

Context switching isn't the issue with threaded implementations of servers. Writing a server with a thread per connection is a bad idea because of the memory requirements: 1000 connections with the default 2 MB stack each already tie up about 2 GB of address space.

I don't think Go's performance envelope has shrunk or become less predictable. I think what we've given up is control over which threads execute which goroutines, which is essentially the NUMA argument above. This will hopefully get better with time.


> less predictable

Alas, it is, because there's no communication between the kernel scheduler and the user-space scheduler. The resulting interaction has non-intuitive results.

Here's an old paper about some of the measured consequences: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50....


> I think with the increasingly desperate tone in your writing, its not hard for people to realize that you're just hating on Go, and at least the responses to your misinformation are educational, and often interesting.

I get exasperated because you guys rarely back up your claims. For instance, the response I get to 'what changed' is 'things'. What things? Did synchronization between threads get 100x more expensive? Does unpredictable blocking, like from page faults, no longer happen? Are real-time and consistent scheduling no longer important? What 'nature of problems' changed?

> You're not taking into account the fact that the C code cannot be allowed to block the main loop of the program; as is true in any event driven system.

Why can't the C code block? Because they do their own M:N threading and segmented stacks. Why do they do that? To run on 32-bit without complications. That's exactly the point I've been making.

Why in 2009+ care about a 32-bit implementation? Like I said originally, good question.


"What's changed in the last decade?"

Uh...the need for highly scalable web servers handling tens/hundreds of thousands of lightweight, I/O bound, concurrent requests?


I thought there were two implementations, a 32-bit one and a 64-bit one.


Unfortunately, Go with its mandatory GC is about as appropriate for systems programming as Java, regardless of how it's marketed...


I don't know. Linus claims he can imagine writing an OS in Go (after they've spent a decade working the bugs out). That's probably the most positive thing I've heard him say about writing kernels in other languages, and I'm inclined to trust his judgement over my own here.

EDIT: My memory was almost correct: he said that (in sharp contrast to C++) he can imagine letting Go into the kernel, but not for a long time. http://www.realworldtech.com/beta/forums/index.cfm?action=de...

EDIT2: s/Linux/Linus/


You're really misrepresenting that thread. Here[1] Linus says C is pretty readable in patch format, because it doesn't do overloading (unlike C++, which was the thread's conversation topic) and thus doesn't much depend on some outside context. Also, C gives the required level of control for kernel programming, in terms of memory model and concurrency (and those conditions implicitly exclude Go), but higher-level languages have something to offer for other problem domains; and in that context GC or concurrency primitives become interesting again.

In answer to that last paragraph, someone asked about Go, which is where you got Linus's overall noncommittal answer.

[1] http://www.realworldtech.com/beta/forums/index.cfm?action=de...


Go allows the same control over memory layout as C; this differentiates it from most other garbage-collected languages.
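
For instance (a small sketch), struct fields in Go are laid out in declaration order with C-like alignment, and the unsafe package lets you inspect that layout when you need to:

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Header is laid out like the equivalent C struct: fields in order,
    // padded to their natural alignment, no hidden object header.
    type Header struct {
        Magic   uint32
        Version uint16
        Flags   uint16
        Length  uint64
    }

    func main() {
        var h Header
        fmt.Println(unsafe.Sizeof(h))          // 16
        fmt.Println(unsafe.Offsetof(h.Length)) // 8
    }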

Go is a perfectly fine language for kernel-mode development; I am aware of at least three efforts to write an operating system in Go (one of them mine).

I do agree, though, that Go is not the right language to use in conjunction with the rest of the Linux kernel. While Go is perfectly fine for a kernel, it forces you towards design decisions and constraints that do not exist in the Linux kernel as it stands today.


I believe you meant Linus.


For the more traditional definitions of "systems programming", yes. I would agree with that.

However I don't think anybody is really marketing Go for that niche.


Originally they were targeting "systems programming", and, being roughly a contemporary of the most well-known Go authors, I can say "systems programming" used to mean compilers, assemblers, and the like. In that sense it would also include browsers, although Mozilla is going with Rust.

Today, "systems programming" is more taken to mean "kernel programming", and I don't get the sense that is what was meant by the senior authors of go.


I always believed "system programming" meant "programming operating system components or the whole operating system" and Wikipedia seems to agree:

http://en.wikipedia.org/wiki/System_programming

Last time I looked, the Go libraries that had to do some tricky things contained some C code, meaning that even the Go authors weren't able to write everything in Go. So Go can't be a substitute for C, and as long as most systems programming is done in C, it's no go for Go.


That's like saying C is not a system programming language because if you write a kernel you need to write parts of it in assembly, and you need to write parts of libc in assembly as well!

Parts of the runtime are written in C and assembly, but that's because there is power in breaking abstractions: http://research.swtch.com/goabstract

As it stands today, Go can be used to write operating systems. Yes, you will have assembly, yes you will likely also have C, so what?


That is what "system programming" means these days. It used to exclude operating system kernels, real-time components.

I suspect that it isn't so much that they aren't able to write everything in Go, but more that it isn't a priority for now. It wouldn't surprise me if Go eventually became self-hosting.


Originally they were targeting "systems programming"

Choose the greatest impediment and cite it as a primary goal.


I am not quite sure if I follow you. Are you saying that "systems programming" is the greatest impediment?


Targeting systems programming is. The existing languages used there are rather firmly entrenched.


And that's a problem. If you want to do system programming, there's C. And that's pretty much it. I don't think this is a very favorable situation.


Why not? It's the language that was used to write the OS, which is why Perl and shells still hold sway here as well.


Me thinks Go's niche is to be an accessible, imperative Erlang.

It is already well equipped to write distributed network servers, web servers, message queue servers and such.


I'd rather reply to wglb, but he's gone too deep...

Anyway, "systems" in this case includes daemons and server processes. I'll update with a reference if I can find it in the Go context.


>"Anyway, "systems" in this case includes daemons and server processes."

For that sense of "systems", Go excels and Java enjoys a great deal of popularity.


I can, and in fact I am writing an OS in Go; it's suitable enough for writing a kernel, for me.


Link? :-)


I can't show a link to the work I do now, sorry, but here's a link to a toy operating system written in Go that proved to me it could be done and inspired me to try it myself:

http://code.google.com/p/gofy/


What do you find incompatible between GC and systems programming?


I think people generally assume systems programming requires low latency and that using a GC implies high latency. That's mostly, but not strictly, true. If you're writing a web server, as opposed to a missile guidance system, the occasional GC pause is probably not a dealbreaker.


Here's an article from IBM on their work on real-time Java, where they develop a GC that can provide a bound on latency, and on the amount of time spent doing GC: http://www.ibm.com/developerworks/java/library/j-rtj4/

Based on that, I don't think I'd rule out the feasibility of doing a missile guidance system in a GC language.


> use syscalls directly(well, except on Windows, for obvious reasons)

Not obvious to me. Please elucidate.


Microsoft doesn't make the specs for the interface between the Windows kernel and the win32 libraries public.
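
Which is why Go on Windows goes through the documented DLL layer instead. Roughly (a Windows-only sketch using the standard syscall package):

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // Instead of raw syscalls, load the documented entry point from
        // kernel32.dll and call it, like any other Windows program.
        kernel32 := syscall.NewLazyDLL("kernel32.dll")
        getTickCount := kernel32.NewProc("GetTickCount")
        ticks, _, _ := getTickCount.Call()
        fmt.Println("ms since boot:", ticks)
    }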


Nor has it been stable. Win8, for example, changes this interface.


Even the Windows API `system calls', like CreateProcess, are just wrappers around NT system calls. There are other wrappers too, like the Subsystem for UNIX Applications (which implements a POSIX API on top of the same NT calls).


You cannot make syscalls directly, as the syscall numbers are generated from the kernel source at build time and change between NT releases. The closest thing you can do is call the Nt* family of functions in ntdll, which are often direct thunks to syscalls.


I think the news here should be "lots of normal people are still surprised that the FOSS movement is littered with uncooperative sperglords everywhere you look".

The deeper and more complex a project, the higher my expectation for this type of behavior. I imagine a big venn diagram of "manages ultra-complex software in use by millions" and "social skills". Linus, of course, being the shining counterexample.

Seriously though, anyone familiar with FOSS projects should be entirely unsurprised by this. If you think I'm joking, go read some of Stallman's rants or peruse esr's "how to have sex" faq.


sperglords is a term i plan to use in the future.

(i think you mean a disjoint venn diagram)


These links (not mine) may help explain a lot to those who may be unfamiliar with Ulrich: http://urchin.earth.li/~twic/Ulrich_Drepper_Is_A_.html


Sweet Jesus, things like this really illustrate some dark corners of free software development:

http://sourceware.org/bugzilla/show_bug.cgi?id=4980

Where the high-publicity trolling in the later part of the discussion is even worse than Uli's initial stubbornness.


Is eglibc likely to merge into glibc?

I was under the impression that one of the reasons Debian moved from glibc to eglibc was friction with the project maintainer.


From what I've been reading, there will be efforts to remerge.


Are you sure you should be submitting "LWN subscriber-only content"?

It seems like they might not be too happy about that.


Check my account name against the author of the article :)

I appreciate your concern. I do occasionally post a subscriber link in a place like HN or reddit with the idea of sharing some useful news and making people aware of what we do. So you shouldn't be worrying about me abusing the LWN subscriber link mechanism (which I implemented in the first place); instead, you should worry about my shameless and transparent marketing efforts :)


Could you please comment on the appropriateness of other subscribers sharing a link like this?

Within certain bounds - glibc being a good example - the general interest in the article might justify its widespread use.

I would hazard a guess that for the majority of articles, LWN would prefer subscriber links be kept relatively "out of public circulation."

Thanks!


Keep in mind that the content stays behind the paywall for 1 week before non-subscribers can access it. So unless a paid subscriber is posting links to every single article, I don't think it's that much of an issue for LWN.

So given that it's really only subscriber-only for about 24 more hours (the original article was posted on March 28th), I don't think that LWN cares all that much.


That's why the share links were created.

When I share subscriber-only LWN content (which I do occasionally), I specifically call out the quality of the content and the fact that it's a subscription site. One of the few I actually do pay money to.


Thanks for sharing, that was an informative article! Well done.


> you should worry about my shameless and transparent marketing efforts :)

Which totally just worked on me, by the way.


As OP has pointed out, he's the LWN editor.

But besides that, LWN has a feature to allow sharing subscriber-only links for a reason. They want people to share those links and discuss them occasionally, as long as someone doesn't create an aggregator that posts all subscriber-only links. And all of the articles become public after a week, anyhow.

They are not trying to prevent people from discussing or reading articles; they are merely trying to encourage people to subscribe if they want to be able to see all of the subscriber-only content as soon as it's posted.


Can I just say that if you can subscribe, LWN is definitely worth every penny. It's everything a professional tech journal should be: detailed, topical, opinionated without being arbitrary, and consistently informative. They also do good deals on institutional subscriptions, which is useful if you want to talk to anybody else about what you've read :)


I think OP is LWN's editor!


"Also significant is the fact that Ulrich left Red Hat in September, 2010; ...he is now VP, Technology Division at Goldman Sachs."

For me this is the punch line to the story. I wonder how he's liking wearing a shirt and tie to the office.


If I remember his temperament correctly, he'll love it on Wall Street. He'll have an entire division to yell at.



