Thursday, July 30, 2009
Blog Squishiness
Posted by Jon Harmon at 11:05 AM
Update: Fixed!
I picked a new template to reset some things that were broken... but that means my blog is so narrow now that some of my photos don't fit. I'll fix it soon... ish...
If you're reading this on RSS, you shouldn't notice any effect.
Wednesday, July 29, 2009
Multi-Parameter Firefox Keywords and Chrome Search Engines
Posted by Jon Harmon at 9:54 PM
By far, my most popular post is my guide on Firefox Keywords and Chrome Search Engines. I recommend reading that one first if you don't know what those are.
Shortly after I wrote that, I found this post on Lifehacker explaining how to combine bookmarklets and keywords for some very clever and useful tricks, but it didn't work in Chrome as written, and I never got around to fixing it. Tonight I finally took the time to figure it out.
The problem (besides the Lifehacker example being overly complicated) is that Chrome doesn't like {}'s in keywords. Those aren't really necessary unless your keywords are very complicated, though, so I got it to work. This is the code we'll be working with:
javascript:
var s='%s'; /* your search terms, separated by ;'s */
var url='YOUR URL, WITH A %s FOR EACH TERM'; /* replace with your url */
var query='';
var urlchunks=url.split('%s');
var schunks=s.split(';');
/* interleave the url pieces with the search terms */
for(var i=0; i<schunks.length; i++)query+=urlchunks[i]+schunks[i];
location.replace(query);
As the code says, all you need to do is insert the url you want to use, with a %s anywhere you want a search term inserted. You then set it up like any other keyword or search engine (see the other post for how to do that; note that you can just copy/paste the multi-line code into Chrome or Firefox, since neither requires you to condense it down to one line first). To use it, just separate your search terms with ;'s.
This fairly simple code can let you do some very cool things. Here's an example for mapping between any two points in Google Maps:
javascript:
var s='%s'; /* the two endpoints, separated by a ; */
var url='http://maps.google.com/maps?saddr=%s&daddr=%s';
var query='';
var urlchunks=url.split('%s');
var schunks=s.split(';');
for(var i=0; i<schunks.length; i++)query+=urlchunks[i]+schunks[i];
location.replace(query);
I set this up as a Chrome search engine with the keyword "mapp" (although something like "directions" might be easier to remember). If I type "mapp NYC; DC", I get a Google map with directions between New York City and Washington, DC.
A warning: if you include a space after the ; like I did in that example, you technically get a space before your second search term. I could complicate the code to clean that out (see the sketch below), but it works fine for Google, and I can learn to avoid it once I set up keywords that don't tolerate it. Oh, and if you include more ;'s than the keyword has %s's, the extra stuff just gets tacked onto the end (for example, "mapp NYC;DC;Other" tries to generate a map between "NYC" and "DCOther").
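If those stray spaces ever do bother you, here's a minimal sketch of one way to strip them, using the maps example from above; the only change is the added replace() call, which trims leading and trailing whitespace from each search term:
javascript:
var s='%s';
var url='http://maps.google.com/maps?saddr=%s&daddr=%s';
var query='';
var urlchunks=url.split('%s');
var schunks=s.split(';');
for(var i=0; i<schunks.length; i++){
  /* trim leading and trailing whitespace from each term before inserting it */
  query+=urlchunks[i]+schunks[i].replace(/^\s+|\s+$/g,'');
}
location.replace(query);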
If you use this, please let me know in the comments if you come up with any clever ones. Remember: this works for any url that you want to insert something into, not just searches, so there are definitely some interesting possibilities out there waiting to be discovered.
Monday, July 27, 2009
Mad Science Monday, 7/27/2009
Posted by Jon Harmon at 11:35 PM
If you've been reading this blog, you might have found yourself wondering, "What exactly makes science mad?" Even if you haven't, I've been wondering quite a bit recently. I've been reading papers, searching for things that are suitably mad, and nothing seems to be up to snuff. So, both to let you know my process and to work it out a bit for myself, I decided this week I'd present:
Meta Mad Science Monday: Defining Madness
There's one definite requirement for a paper to make the cut for Mad Science Monday: it has to clearly be science, not engineering. The researchers have to be testing a hypothesis using controlled experiments, not piloting new technology.
Beyond that stipulation, there are a lot of signs that a study might be mad. Here are some of them.
1) Use of Mad Engineering as a Research Tool
When I saw a study involving implanting lasers in rats' brains, I knew there was a strong possibility that I was reading about mad science. Frikkin' laser beams are often mad engineering, and implanting them in rats' brains (and using viruses to alter those rat brains) cements that definition. Robots also often fit this rule. If the researchers are using mad engineering, there's a good chance they're doing mad science.
2) Mergers of Man and Beast
A lot of biological research involves human genes, or cognates of human genes, being tested in non-human models. But when researchers implant human genes into mice to test something unquestionably human—speech, in this case—there's a good chance we're looking at mad science. That particular paper also has another defining characteristic of mad science, which is why it launched this project.
3) Mad Quotations from the Researchers
If I see a story about some research in which they say, for example, "We will speak to the mouse," I know there's a good chance I'm looking at mad science. If you can imagine lightning flashing as the researcher shouts the quote, it's probably something I need to write about.
4) Quantum Entanglement
Any paper about quantum entanglement is mad science. Some of them are too thick to boil down into something fun to write about, but they're still mad science. That shit is just weird.
5) Research Involving Fear, Pain, Etc.
If the subjects of the research have to be scared, or pain has to be inflicted upon them, or the research otherwise sounds like it's on questionable moral ground when I first hear about it (before, inevitably, reading about the very humane protocols used in the research), it's probably mad science. This even works if the subjects aren't human but the research has potential human applications. That borders on the next criterion.
6) Research with Clear Mad Engineering Applications
Clear applications usually aren't present in my favorite research, but if they're mad applications, they can make me take notice. If the research is aimed at, say, finding the formula for taking over the world, that's probably mad science. Research on weather control, giant weapons, doomsday devices, etc. would also qualify as mad science, but I have yet to find anything good in this arena.
Those are the criteria I use right now. I currently have a Rule 5 and a potential Rule 1 on deck, but they both look like weak applications of those rules. If you notice anything else fitting these criteria, or notice a criterion I missed, let me know in the comments.
Monday, July 20, 2009
Mad Science Monday, 7/20/2009
Posted by Jon Harmon at 6:25 PM
Are you a mad engineer looking to take over the world (or even a small section of it)? Do you also have a powerful, supervillainous ability to extend a simple example into a general principle? If so, this research is for you.
Mad Observations: In widely varying areas, animals follow leaders. This behavior ranges from ants seeking food, through birds and butterflies migrating long distances, to human politics. In all of these cases, the followers have a strong tendency (and motivation) to keep following the established leaders; if they didn't, particularly in the non-human examples, things would quickly be very bad for them.
Mad Reference: "Effective leadership in competition." Hai-Tao Zhang, Ning Wang, Michael Z. Q. Chen, Tao Zhou, and Changsong Zhou. Full text available from arxiv.org. (Note: this is a not-yet-published letter, not a peer-reviewed paper; it's basically a raw presentation of research, which is what arxiv.org is for.)
Mad Hypothesis: As the authors state it, "is it possible for the minority later-coming leaders to defeat the dominating majority ones and how?" In other words, the hypothesis they're attempting to disprove is "It is impossible for minority later-coming leaders to defeat the majority leaders." If they manage to disprove that, they'll also have the how covered.
Mad Experiment: The researchers used a "generic model of collective behavior, the Vicsek model." As far as I can tell, this is a widely used model of collective motion. Specifically, in this model each individual aligns its motion to the average of its neighbors'. In other words, if you want to think of this research in a more global context than just motion, you have to make the assumption that the individuals you're targeting will tend to follow along with whatever the people near them are doing. However, "near them" could mean "politically near them," for example, so it's not necessarily a bad assumption. Remember that translation of "near them" when pondering the rest of the findings, though.
In this model, the researchers introduced "leaders," which were individuals that did not simply align themselves to their neighbors. The followers obeyed the "follow your neighbor" rule, but the leaders were set to either move right (the established leaders) or left (the newcomer leaders).
After establishing the model with the right-leaders + followers, the researchers introduced late-coming left-leaders in various patterns and with various distributions. They measured how well these patterns of left-leaders were able to overcome the movement direction established by right-leaders.
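To make the setup a bit more concrete, here's a minimal toy sketch of a Vicsek-style simulation with fixed-heading leaders (in JavaScript, since that's what I've used elsewhere on this blog). This is my own illustration, not the authors' actual model: the agent counts, interaction radius, noise level, and the choice to start both leader groups at the same time are all made up for the sketch.
/* Toy Vicsek-style model: followers align with neighbors; leaders keep fixed headings. */
var N = 200, BOX = 10, RADIUS = 1, SPEED = 0.05, NOISE = 0.1, STEPS = 500;
var agents = [];
for (var i = 0; i < N; i++) {
  agents.push({
    x: Math.random() * BOX,
    y: Math.random() * BOX,
    theta: Math.random() * 2 * Math.PI,
    /* first 10 agents are "right" leaders (heading 0), next 10 are "left" leaders (heading pi);
       unlike the paper, both groups are present from the start in this toy version */
    leader: i < 10 ? 0 : (i < 20 ? Math.PI : null)
  });
}
function step() {
  var newTheta = agents.map(function (a) {
    if (a.leader !== null) return a.leader; /* leaders ignore their neighbors */
    var sx = 0, sy = 0;
    agents.forEach(function (b) {
      var dx = a.x - b.x, dy = a.y - b.y;
      if (dx * dx + dy * dy <= RADIUS * RADIUS) {
        sx += Math.cos(b.theta);
        sy += Math.sin(b.theta);
      }
    });
    /* followers take the average heading of their neighbors, plus a little noise */
    return Math.atan2(sy, sx) + (Math.random() - 0.5) * NOISE;
  });
  agents.forEach(function (a, idx) {
    a.theta = newTheta[idx];
    /* move at constant speed on a periodic box */
    a.x = (a.x + SPEED * Math.cos(a.theta) + BOX) % BOX;
    a.y = (a.y + SPEED * Math.sin(a.theta) + BOX) % BOX;
  });
}
/* near +1 means the group follows the established right-leaders; near -1 means the newcomers won */
function rightwardness() {
  return agents.reduce(function (sum, a) { return sum + Math.cos(a.theta); }, 0) / N;
}
for (var t = 0; t < STEPS; t++) step();
console.log("rightwardness after " + STEPS + " steps:", rightwardness().toFixed(2));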
They All Laughed, But: The researchers found that the late-comers were able to change the direction of the group, but that their ability to do so could be predicted based on two factors: the spatial distribution of the leaders (how far apart groups of leaders were) and the clumping of the leaders (how close together the members of a group of leaders were). Higher values for either of those factors increased the chance that the left-leaders could overtake control of the group. And, of course, it helped if the right-leaders had lower values for those factors.
Mad Engineering Applications: What these researchers found is that two factors help in taking over control of a group: wide distribution of the individuals working to change the group, but tight clumping of change-introducing individuals. In other words, it's good to spread out your leaders, but give them allies to work with locally.
It's easy to accidentally keep too much of the geographic part of this model, though. For example, if you translate the findings to politics, you should translate all of the model to politics. If you want to spread an idea throughout a group, you need people ideologically clumped to help each other influence others who are close to them ideologically, but it helps to also have such groups spread out to different places on the ideological scale.
It'd be interesting to see other ways mad engineers could find to adapt this model to other scenarios. If you think of any, let me know in the comments.
Wednesday, July 15, 2009
Have a Horrible Day!
Posted by Jon Harmon at 9:24 AM
Today is the one-year anniversary of the release of Dr. Horrible's Sing-Along Blog. If you have not yet seen it, watch it now. If you have seen it and love it like I do, show them some love in return.
Remember: It's not about making money, it's about taking money.
Monday, July 13, 2009
Mad Science Monday, 7/13/2009
Posted by Jon Harmon at 11:36 PM
Dammit. Dammit dammit dammit.
You might think from that intro that I'm writing about one of the more widely talked about science stories from this week. I may get to that one eventually, but the actual source of my frustration is the realization this morning that I had gotten the wrong paper for this week's entry. Sure, the one I got is mad, but I didn't get the one by the same group that involves rats with frikkin' laser beams attached to their heads. I'm going to try to give as much of an overview of all of that group's research as I can, but I can't for the life of me figure out how they made the stuff work in the laser rat story.
Update: I got the other paper in the middle of writing this, but I'm still focusing mostly on the newer paper; the one with the actual laser beams appears to use many of the same techniques, and I haven't had as much time to digest it, so I'll stick to the one I'm mostly grokking.
But I'm getting ahead of myself. First you're going to need some...
Mad Observations: A lot of observations are necessary to culminate in this level of mad science. There's too much to cover completely, but let's see what I can get.
First, there's an archaebacterium, Natronomonas pharaonis, that makes a protein that pumps ions across the cell's membrane in response to light (specifically certain wavelengths of orange light). This pump is called the Natronomonas pharaonis halorhodopsin chloride pump, or NpHR. This pump has been adapted to be expressed in mammalian cells.
Next, there's the known mechanism of epileptic seizures, namely that they're caused by cascades of electrical potentials; basically, one cell has more positive charge inside than outside, and that causes it to induce the next cell to switch to the same condition, and so on down the line.
Mad Reference: "Optogenetic control of epileptiform activity." Jan Tønnesen, Andreas T. Sørensen, Karl Deisseroth, Cecilia Lundberg, and Merab Kokaia. PNAS, published online before print July 6, 2009.
Mad Hypothesis: This paper even included a direct reference to the hypothesis they were testing: "Therefore, we tested a hypothesis that epileptiform activity can be optically controlled by selective expression of NpHR in principal cells." In other words, they thought screwing with the potential differences across membranes (by pumping chloride ions into those cells) would stop the epileptic cascade, and they figured they could induce that change with light by putting those pumps into the right cells. They also tested whether putting in the pumps (but not inducing them with light) caused any changes in the behavior of brain cells, and whether turning on the pumps caused any other problems (like sucking up too many chloride ions, stopping other things that need them from functioning properly).
Just to make sure that's clear, what they were testing is whether shining a laser beam inside a brain would work to stop seizures. Obviously.
Mad Experiment: It just keeps getting better. To test whether shining a laser on brains might be useful for treating epilepsy, they infected rats with a virus. Ok, this sounds all kinds of mad scientist, but it's actually a fairly well-established technique. This virus, technically a lentivirus, was modified to incorporate the gene for NpHR into cells that it infected. That gene was put under control of the calcium/calmodulin-dependent protein kinase IIa (CaMKIIa) promoter, meaning that, no matter what cells the gene might get inserted into, the protein would only be expressed in certain cells--namely, brain cells. They also put the enhanced yellow fluorescent protein (EYFP) in the same virus, also under control of that promoter. That let them cut up some of the rat brains (and other rat bits) and confirm that the technique had worked to get the protein expressed in the right place, and not in the wrong places (because it'd be bad for light-sensitive proteins to be expressed on, say, the skin, where they'd be doing their thing all the time, not just in response to a frikkin' laser beam).
Most of the research, unfortunately, was done in cultured rat brain cells, using the same basic technique I described above. But they showed that they could get the protein expressed in rat brains. That's important, since they'd already done other research implanting frikkin' laser beams in rat brains to stop Parkinson's tremors.
Yes, that's a photo of a rat having light beamed into its brain through a fiber optic cable attached to a laser. We truly live in amazing times.
So anyway, they got their protein into cells (both in rat brains and in cultured cells), and then they tested the cultured cells using established epilepsy tests. They did these tests on unaltered cells, altered cells without light, and altered cells with the correct wavelength of light shining on them.
They All Laughed, But: It worked. When they shined lasers on the altered cells, the seizures (well, technically simulated seizures, since it was just a plate full of cells) stopped. It looks like this would actually work. But the whole time I was reading it, I was thinking, "Um, so. You need a frikkin' laser beam implanted in your brain." But then I found out I was missing half the story, since they'd already implanted frikkin' laser beams in rat brains. So this whole thing would totally work, and all it takes is:
- infection by a virus to put an archaebacterial protein into your brain,
- glowing proteins engineered from jellyfish thrown in to make sure it worked,
- surgery to implant a laser (or lasers) in your head, and
- potentially a fiber optic system following you around (although I guess we're bigger than rats, so maybe the lasers could be worn directly).
I say this all jokingly, but apparently that's way better than the current approach to stopping intense seizures, namely cutting out the affected area and hoping you don't remove too much that's useful.
Mad engineers: This one's all yours now. We scientists have shown it would work. Implementing this is all you.
Mad Science Monday Coming Soon
Posted by Jon Harmon at 5:54 PM
I'm trying to get another paper before writing this up. If I don't get the paper in time, I'll cover what I can from the one paper and the abstract of the other. In either case, I'll be back in a few hours with this week's Mad Science Monday. It's a crazy one, so stay tuned...
Wednesday, July 08, 2009
Subscribing to Specific Labels
Posted by Jon Harmon at 6:58 PM
I'm sure most of you reading this like to read everything I write, regardless of whether it's about politics, mad science, or food. However, some of you may only want to keep up on one aspect of my blogging. If so, here's how to subscribe to a specific label (aka tag) from my blog.
- Figure out which label you want (they're over there on the right), and note it. Make sure you have the spelling right. Click the label to see if there are any special characters you need (for example, "mad science" is actually "mad%20science", because the space has to be encoded as %20 so browsers can understand it; %20 is currently the only special encoding you need for any of my labels, but that could theoretically change in the future).
- Add that label to the end of this url: http://jonthegeek.blogspot.com/feeds/posts/default/-/ (for example, http://jonthegeek.blogspot.com/feeds/posts/default/-/mad%20science).
- Add that url to your favorite RSS reader.
That's all it takes. Enjoy!
Note: The same trick works for any blog here on Blogger; just replace "jonthegeek" with the name of the blog you want.
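If you'd rather not work out the %20-style encoding by hand, here's a minimal sketch (my own, using example values) that builds the feed url for you; encodeURIComponent handles the encoding of spaces and other special characters:
/* Build a label feed url for a Blogger blog; "jonthegeek" and "mad science" are just example values. */
function labelFeedUrl(blogName, label) {
  return 'http://' + blogName + '.blogspot.com/feeds/posts/default/-/' + encodeURIComponent(label);
}
/* prints http://jonthegeek.blogspot.com/feeds/posts/default/-/mad%20science */
console.log(labelFeedUrl('jonthegeek', 'mad science'));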
Monday, July 06, 2009
Mad Science Monday, 7/6/2009
Posted by Jon Harmon at 11:01 PM
It's Monday again (already!), so that means it's time for some mad science. It might not be immediately obvious how this week's article fits the theme, but I have one mad science stereotype stuck in my head now about this one, so hopefully I can get you there, too.
Mad Observations: Despite portrayals in media, scientists are human beings. Sometimes decisions made by human beings are clouded by emotion.
Mad Reference: "Large-Scale Assessment of the Effect of Popularity on the Reliability of Research." Thomas Pfeiffer and Robert Hoffmann. PLoS ONE 4(6): e5996, June 24, 2009.
Mad Hypothesis: Research is not affected by the trendiness of its subject. Yes, I know; this is one of those hypotheses that is pretty much obviously untrue once you say it, but it's something nobody had said scientifically (and followed up with experiments), and so it was tacitly accepted as truth.
Mad Experiment: This is what's known as a meta-analysis paper. The researchers didn't perform experiments, per se. Instead, they analyzed over 60,000 published statements about 30,000 unique interactions between yeast proteins. This data set was drawn from papers focusing on specific interactions: each paper contributing to those 60,000 statements investigated one or a few interactions using small-scale, focused experiments. They evaluated the "popularity" of the proteins involved in these interactions by how many times those proteins were mentioned (i.e., more mentions = more popular).
They then compared that first data set to a second data set, gathered using high-throughput, mostly automated techniques. These high-throughput techniques don't focus on one or a few interactions, but instead test pretty much everything simultaneously. In other words, these techniques don't focus on anything in particular, so they don't "care" whether the interactions they're looking at are "popular" or "interesting."
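To give a rough sketch of the kind of comparison involved (my own toy illustration with made-up data, not the authors' actual method or numbers): for each literature statement, check whether the high-throughput data confirms it, then compare the confirmation rates for statements involving popular versus unpopular proteins.
/* Illustrative only: literature statements about protein interactions, a set of
   high-throughput-confirmed interactions, and mention counts as a popularity proxy. */
var literature = [
  { a: 'POP1', b: 'PRT9', interacts: true },
  { a: 'POP1', b: 'PRT3', interacts: true },
  { a: 'OBS7', b: 'PRT2', interacts: true }
];
var highThroughput = { 'POP1|PRT3': true, 'OBS7|PRT2': true };
var mentions = { POP1: 812, OBS7: 4, PRT9: 11, PRT2: 7, PRT3: 15 };
function confirmationRate(statements) {
  var confirmed = statements.filter(function (s) {
    return highThroughput[s.a + '|' + s.b] === true;
  });
  return statements.length ? confirmed.length / statements.length : NaN;
}
/* Split statements by whether either protein is "popular" (arbitrary cutoff of 100 mentions). */
var popular = literature.filter(function (s) { return mentions[s.a] > 100 || mentions[s.b] > 100; });
var unpopular = literature.filter(function (s) { return mentions[s.a] <= 100 && mentions[s.b] <= 100; });
console.log('popular confirmation rate:', confirmationRate(popular));
console.log('unpopular confirmation rate:', confirmationRate(unpopular));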
They All Laughed, But: The reason this research seems mad sciency to me is that I keep imagining these researchers giving their speech about the popular researchers laughing at them. Well, who's laughing now?? It turns out, when you compare the results from the specific data with the results from the high-throughput data, popular proteins seem to get by on their looks. Specifically, interactions involving unpopular proteins tend to agree in the data sets more often than interactions involving popular proteins. Popular proteins have a higher proportion of likely incorrect interactions published than do unpopular proteins.
When I first read the summary of the research, I thought this might be an example of damned lies; I figured it wasn't necessarily that the unpopular protein research was correct more often, it was just that nobody bothered disproving statements about those losers. But the methodology here seems sound; it looks like the popular proteins really are getting treated differently. This points out a possible large flaw in current research, and a need to put more safeguards in place to prevent this stuff from getting through. Strong work, mad scientists. You have successfully exposed the flaws in the work of your enemies.