Our first European family vacation...

We're back in the States after a great couple of weeks in the Basque Country of Northern Spain. While travelling with a seventeen-month-old toddler wasn't exactly pleasant, at least it wasn't the horror show we were expecting. We couldn't imagine sitting through the 10-hour flight from SF to Amsterdam with a squirming ball of energy on our laps, so we splurged a little and got him his own seat.

To lug around everything a toddler needs, we found a really nice rolling carry-on suitcase that fits comfortably under the seat and opens wide from the top. We filled it with multiple small tetra-packs of milk, yogurt, cookies, and bread sticks, as well as diapers and creams, bottles, and a few busy toys including a portable shape-sorter. Thankfully, our son spent most of his time playing with the touch screen on the seat in front of him or raising and dropping the tray table, the latter of which annoyed the flight staff during take-off and landing. Other than during a small bout of turbulence on the way there, our travel was almost entirely tear- and drama-free.

Well, drama-free but a bit boring. As a general rule, I don't mind long flights; I'm more than happy to pack a few novels, an iPad, and one (or two) portable gaming systems, keeping myself occupied pretty much the same way that I do back home[1]. Sadly, this is quite a bit more difficult with a curious toddler sitting next to you. Eating is also quite complicated - we had to get creative with ways to keep our son from grabbing food and beverages and spilling them everywhere. Alas, he still managed to kick at least two beverages into the aisle and pour a bottle of water over himself. Any landing you can walk away from, right?

Spanish family and friends wanted to see the baby, of course, so we didn't do as much sightseeing and travelling as on our previous trips. Thankfully, my Castilian[2] has gotten better, and I didn't feel quite as oblivious to everything going on around me as I had before. Hanging out with great people while enjoying amazing food and drink makes for a great trip, sightseeing or no. Speaking of which, I'd like to finish up this post by spending a bit of time praising the Basque Country's culinary prowess.


Even though Europeans won't shut up about it, I can't help but agree about how much better the food is there. Yes, you can pay to eat at high-class restaurants with excellent food in North America, but the average family meal in the Basque Country is of much higher quality than what most of us eat. Part of this is because of the concept of the 'siesta', whereby many shops and services close between one and five in the afternoon, giving people time to go home and prepare meals[3]. It's also because many foods are simply of higher quality: they're purchased frequently and therefore don't have to be processed for long-term preservation - bread, eggs, cheese, meats, etc. There are many local bakeries and butchers in each small town, and most people buy fresh bread every single day. I'd love to have similar access to freshly prepared foods in North American towns, but it's really population concentration that makes this possible: buying fresh bread is much less practical if you can't walk to the local bakery in two minutes[4].

Another important difference comes down to portion sizes. They're smaller in Europe, and yet I never feel hungry after meals. Eating is a much more leisurely activity (in Spain at least), which probably leads to eating less due to the delay in time-to-satiation. The upshot of this is that I can spend two weeks in Spain during which I eat some of the best food I've ever had, and yet I don't 'feel fat' afterwards. 

The Basque people are particularly passionate about cuisine: as a hobby, many men belong to 'Sociedades' (or txoko in Basque), which are essentially cooking clubs where people can develop recipes with professional equipment. It's not surprising that 'El País Vasco' has the highest concentration of Michelin stars per capita anywhere in the world.

... and drink

I've read multiple articles arguing that part of the reason for America's 'binge-drinking' problem is because of the country's long history of demonization of alcohol. I'm not saying that Spain doesn't have problems with alcohol, but in my time here I've witnessed a much more reasonable relationship with 'the drink' than I often see back home. 

Small towns have bars on every corner where people socialize quite frequently. It's totally common to see two couples sharing drinks and chatting while their kids snack on bar food. Oh, and the bar food here is legendary. Bars serve pintxos (pronounced 'pinch-ohs'), which are elaborate little plates showing off all kinds of different concoctions and ideas. They're a huge point of pride, and the pintxo capital, San Sebastián (Donostia in Basque), has competitions to encourage new and interesting recipes. These things are ridiculously delicious, and put garbage like fried cheese sticks and onion rings to shame:

This is an awesome pintxo place called Bar Zeruko. Notice how the counter is filled with a huge variety of exotic and delicious finger-foods.

[1] I also brought a solo-play 'game book' called Destiny Quest that was recommended to me by a friend. I didn't get to play it as much as I would've liked as rolling dice on those small tray-tables is difficult.

[2] There are at least three fully differentiated languages spoken in Spain (Castellano, Catalan, and Basque [or 'Euskara' in Basque]) as well as several dialects such as Galician ('Galego'). I understand why people from cultures that speak languages other than the dominant Castellano bristle at calling it 'Spanish'.

[3] The siesta blows my mind every time. In addition to being part of the culture, I appreciate that closing up shop for a few hours in the afternoon allows small businesses to operate with only a single staff member. Nevertheless, coming from a place where I can buy whatever I want, whenever I want, it's tough to wrap my head around having to plan my schedule to avoid a three-to-four hour window each afternoon. This is probably a more general observation about European culture: if you're cool with it, it's very supportive of a high quality-of-life. If you're not used to it, it can feel somewhat rigid. This is something I often think about: when is culture an anchor versus a chain?

[4] In some cases, I think there are just differences in preparation that lead to more flavorful foods in certain parts of Europe as compared to North America. See for example the larger number of regulations when it comes to processing eggs here in the States.

2016 in review...

I have to admit that 2016 was a bit of a rough year; and not in the John Oliver "F$%k 2016" because-of-Brexit-and-Trump's-election kind of way (although there is that too). Rather, events at work, coupled with the realities of being a relatively new dad, conspired to complicate things.

As parents we learned that 'with mobility comes great responsibility'. Once our son was able to run around the apartment, gone were the days of being able to sit down and read or use a PC. As I should have expected - though perhaps hadn't fully appreciated - this led to an utter tanking of at-home productivity. In fact, I daresay that 2016 is the least 'productive' year I've had since becoming an adult. Don't get me wrong, a large part of the reason that I left academia was because I felt that the poor work-life balance wasn't justifying the rewards. Here I'm talking about the ability to do anything at home - be it work, hobbies, blogging, etc. Also, because we've been more-or-less sticking to a 'no-screens' policy, I've pretty much fallen off of the face of the earth when it comes to staying in contact with folks via Facebook or email.

I knew that having a child would mean giving up a lot of free time, but I've seen other folks manage to continue productive postdocs or careers whilst being new parents. Unless you count watching the entire run of The Sopranos in short bursts once the baby was put to bed and the kitchen was cleaned, I did little at home, work or play. I know, I know - this probably shouldn't be a big deal, and again, I should've expected it. But as someone who, as a postdoc, used to work most weekends, I really felt like I was accomplishing nothing.

Things kind of took a weird turn career-wise as well: organizational changes and reprioritization of objectives led to my main project shifting unexpectedly. I was quite passionate about what I was doing and, unfortunately, it did bum me out. On the plus side, this forced me to learn a number of new skills and explore a completely new field. Thankfully, things are looking up for 2017 as I'm actively working on multiple manuscripts detailing our work on my 'passion project' and it looks like I'll be heading up some more exciting stuff.

I feel like I tell myself this every year, but my main goal for 2017 is to better balance the demands of work, family, and play. First, I'm crossing my fingers here, but I assume that our son will soon (eventually?) be able to occupy himself enough that I can get a little more work done here and there. You know, to keep treading water if nothing else. Second, I've always been pretty bad about staying in contact with folks via email, but as mentioned above, in 2016 I fell off of the map. To be fair, our 'no screens' policy has a lot to do with this, but I don't think that becoming a hermit will do anything to help with my feelings of accomplishing nothing. Oh, and as always, I want to get back to blogging. Though I've wanted to get back to blogging since like 2009[1].

I'm not calling these resolutions, because those typically fail. Instead, I think that my holiday break is as good a time as any to adjust my priorities and objectives such that I can achieve as many of them as possible. All of this being said, I realize that I'm basically building a house of cards atop the expected behavior of a toddler. Plan B will be to start pounding two cans of Monster™ every day instead of one. 

[1] I have a good reason for wanting to blog. As I may have mentioned before, writing is a skill that benefits from regular maintenance. When I was blogging and writing papers regularly, sentences and ideas flowed much more naturally than they do now. If you want to get good at coding, code every day. Same goes for writing, no? Alternatively, I may just be getting old. Well, I'm certainly getting old.

Reflections on parenthood: one year in...

Our son recently had his first birthday, marking a milestone that I felt was worth reflecting upon. In some ways, the past twelve months have felt like five years, despite all of the talk about kids growing up so fast. In other ways, it's also been pretty fulfilling.

The best part of being a parent is seeing our son go through the fascinating stages of childhood development: beginning as a helpless, chubby little thing, but soon figuring out how to sit up, crawl, walk, and eventually, say a few words. Folks who know me also know that I generally haven't been a 'kid person', but things are completely different when it's your own. He gets so excited whenever he learns to do something new that it's infectious, and lasts for weeks afterwards. I've avoided being that annoying parent on Facebook, but the family's been receiving their regular dose of baby pics and videos.

As noted in a previous post, we've also discovered that we have a very active baby. As he's stumbled upon new modes of locomotion, he's become less-and-less interested in sitting still in his high-chair, stroller, or car seat. We're currently visiting my family in Canada, and I don't think that the grandparents quite realized how tiring it can be to follow him around for hours, as he tries to pull on everything, whether nailed down or not. You very quickly realize how much more effort this takes when you're visiting a non-baby-proofed home.

In retrospect, this is the aspect of parenthood that I was least prepared for: the complete inability to accomplish anything while our son is awake. On a typical weekday, the baby goes to bed sometime between 7 and 8:30 pm. By the time we've cooked dinner and cleaned up, it's easily 10. I know that many folks try to get in a few hours of work before bedtime, but sadly, I've been terrible at doing this. On a good night, my gf and I get in 30-60 minutes of Netflix before one of us passes out. This has also had a pretty serious impact on my fitness routine and, sadly, I've begun to fall into 'dad bod' territory.

I work with several new parents and, from our discussions, I know that I'm not alone. We all feel the challenge of working in positions with semi-frequent spikes in workload and we all have a hard time taking our work home with us. As I've lamented before, living in the Bay Area necessitates existing in a two-income household, and I have no idea how some people can maintain the work demands of a postdoc or academic position without some form of extra-party solution [1]. I know that having local family available to take care of the baby once-in-a-while would make a huge difference for us: we've only hired the services of a babysitter once in the past 12 months, and it was wonderful.

We knew that having a kid would be a life-changing event. But, depending on where you live and your support situation, I think that it can be far more 'life-changing' than expected. Regardless, things are looking up, and our son seems to be falling asleep a little bit easier and sleeping a little longer (those 5:30 am mornings were awful). 'Fingers-crossed', but in a few more months he may actually be able to tell us what he wants, leading to fewer struggles to make him happy.

Now our attention has shifted from simply keeping him alive to seriously thinking about the future: if San Francisco and its surroundings continue on their current course, no reasonable amount of two-person income may get us to where we want to be in terms of living conditions, and unfortunately, we may have to begin considering alternatives.

[1] One extra-party solution that I've seen quite regularly is that of the 'au-pair' babysitter. This requires providing room-and-board, so it's not doable without a multi-bedroom house or condo. But even if we had the appropriate conditions, I'm not sure how I'd feel about the whole thing in general. You feel bad enough sending your kid to daycare without having a "substitute mom" around 14 hours a day.

Where did the time go?

This past Saturday, my gf and I went for a walk along a beautiful nature trail that happens to be near our place. We buckled our now 10-month-old son into his stroller and started along the path. Around the 25-minute mark, he began shrieking [1]. See, he doesn't like to be in his stroller for very long - he'd rather be held. Except that doesn't last very long either, and then he'd rather crawl around on the ground for a while... You get the idea.

However, what I really wanted to discuss was the couple that we saw as we were walking back to the car. They had a kid about the same age as ours, sitting in one of those three-wheeled activity buggies, and they jogged past us. They were JOGGING. I couldn't believe it. It was so mind-blowing to us that we're still bringing it up days later.

There's a reason that I haven't blogged, or posted much on Facebook, or done anything much, really, during the past three months: I'm always exhausted. I mean really tired - more tired than I think that I've ever been.

The baby books I read suggested that we could expect our kid to start sleeping through the night sometime between three and six months. Ha! We're lucky if he sleeps three hours in a row [2]. I'm probably getting about 6 hours of sleep per night at best, and my gf isn't even getting that. It's kinda crazy because we'd been doing so well from months three to seven. Then it all started going downhill.

It's amazing how chronic sleep deprivation will sap your productivity and ambition. Work, home, life in general - you name it - I want to get more done, but I just don't have the willpower. I have bags under my eyes. I've fallen asleep at work [3]. I've got a lot to write up at work, and forming coherent sentences is surprisingly challenging.

I can't imagine doing this as a postdoc. The baby needs constant attention and given that my gf is more sleep-deprived than I am, it wouldn't be fair for me to leave her alone with our son so that I could get more work done. It's a good thing that work-life balance is better in industry, but I still feel like I'm doing all the running that I can do to stay in the same place.

I think that both my gf and I feel like we're losing touch with the outside world. It's conceivable that people actually think we've moved away or something. Everyone is still telling us that this phase will pass, and that eventually we'll start getting more sleep as well as being able to do more in general. I cannot wait. 

[1] Like clockwork.

[2] Don't worry, we've been to all of the necessary pediatric checkups. He's fine - but, unfortunately, an unstable sleeper.

[3] Conveniently, we have these beds that anyone can use for napping purposes. Silicon Valley tech companies, eh?  

Kickstart my heart...

Lately I've been backing an increasing number of projects on Kickstarter and Indiegogo. I really enjoy the concept behind these companies - even if I've already been burned once (more on that in a moment).

I've (frequently) voiced complaints along the lines of "why don't they make things like this anymore?", or "I wish someone would fix this first-world problem"[1]. In particular, this is often the result of my having fairly 'niche' interests: it's often not worth it for large companies to cater to small audiences, even if the market is still profitable.

That's where 'project backing' websites give me (and other weirdoes) the opportunity to put our money where our mouths are. If I really want a card game based on Cyanide & Happiness, or a mealworm farm because I'm sick in the head, then I can pay for those[2]. 

But alas, every rose has its thorn, and there is a dark side to backing projects. We all know that a 'pledge' towards a project is really a 'donation'. Just because the campaign says that a $25 pledge will net you one shiny copy of a game, album, or art print, it doesn't guarantee that the creator will actually follow through. Unfortunately(?) people have come to expect to receive what they back, and Kickstarter and Indiegogo have transitioned from being 'investment platforms' to pre-order services.

I was recently burned by a Kickstarter campaign for the 'Unofficial' Bay Area Expansion for Cards Against Humanity. It seems as though its creator took our money, had the cards manufactured, and then proceeded to sell them on eBay without actually fulfilling the backer rewards promised to those who pledged (see the comments on the KS page, which are nothing but vitriol).

Losing $25 sucks, but it did remind me of a valuable lesson about life in general: caveat emptor. In the case of the CAH expansion, the creator offered no relevant experience attesting to his ability to see the project through. As a counter example, I met a creator at Maker Faire who showed me a demonstration of a prototype drone that he was soon going to fund via Indiegogo. He'd also successfully funded two previous campaigns and received positive reviews. I backed his campaign last year and got my drone this past week. It works great, but alas, I cannot fly it outside of my tiny apartment[3].

When I told my girlfriend about having lost money backing a Kickstarter, her immediate reaction was to point out how frivolous these things are and why I shouldn't have wasted my money on them. Despite this, I find myself undaunted. Yes, I'll probably be a bit more discriminating than I already have been (I haven't backed any perpetual motion machines or laser shaving devices), but as long as there continues to be a fine selection of minimalist wallets and geeky card games[4], I'm in.  

[1] Or "Get off my lawn!" 

[2] Reaching niche audiences isn't the only function of these websites: charities and artists also seek donations as well. Given that these functions were available prior to the emergence of sites like Kickstarter, I don't see them as the primary function of this new marketplace. Others may disagree. 

[3] I backed the Micro Drone 3.0 project back when I lived in Redwood City. Since then, I've moved to San Bruno, and the FAA has issued guidelines banning drone flight within 5 miles of an airport. Since I now live ~1 mile from SFO and am surrounded by hills, I'd need to drive about 20-30 mins away before I can fly my damned drone :-(

[4] From reputable creators, of course.

Musings on family housing...

If there's one topic that's guaranteed to come up in every casual conversation in the San Francisco Bay Area, it's the cost of housing. I assume that most people are aware that rental rates in and around SF are crazy. But just in case: the median monthly rent for a one-bedroom apartment in the city is somewhere around $3,450. At such a price, a net salary of $41,400 wouldn't leave you a single cent with which to eat.
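For the curious, the arithmetic behind that claim is straightforward (a quick sketch using the approximate figures above, not exact market data):

```python
# Sanity check: annual cost of the median SF one-bedroom at ~$3,450/mo.
median_monthly_rent = 3_450
annual_rent = median_monthly_rent * 12
print(annual_rent)  # 41400 - exactly the net salary that leaves nothing for food
```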

It's easy to compare this rental rate to that of other cities and gasp in horror. However, such aggregate comparisons can be misleading as what you'd really like to know is not the relative median prices, but rather the relative cost of similar apartments in different cities. I suspect that the median apartment quality in SF is lower than that of many other cities[1]. 

As a personal example, we recently moved into a ~$2,000/mo complex in SF's suburbs. Upon taking occupancy of the unit, we discovered that it didn't have a working phone line. This had to be repaired by AT&T technicians who noted that the wiring in the unit was terrible, leading to crossed wires and shorts (hence three weeks sans internet). This is important, because you generally don't test the wiring of places during the open-house tour. It boggles the mind: what is being done with these rents? It's clearly not upkeep.

Another of my suspicions is that rental prices are partly driven up because most techies are young and don't mind essentially lighting a large fraction of their salaries on fire rather than investing it in a house or condo [2]. On the other hand, if you're a couple of thirty-somethings who just started a family, investing in a home is the kind of thing you start to think about. For kicks I took a look at some places nearby. Crappy homes start at about $1,000,000 and go up from there. After putting a downpayment on something like that, I'd be pretty worried about a potential drop in housing prices.

At some point, our son is going to need his own room and even maybe some space to play. Someday we're going to want to send him to a decent school, which usually involves living in the right area, and so on. I enjoy the Bay Area, but in many ways, it's not very family-friendly.

Realistically, this leaves us with two options: a) we could consider the East Bay. However, as commutes across the various bridges are an hour or more each way, I'm disinclined to pursue that option. Or b) we could look into buying a place in the Necropolis of Colma. Housing prices there seem oddly depressed, which leads me to believe that they're having some sort of problem with the undead.

Zombies or no, it's the property values that I find terrifying. 


[1] I have no good evidence for terrible apartment qualities in SF (though a Google search seems to support it). However, I'm inferring it because 1) the insane demand for apartments means that (slum/land)lords need only do the barest of maintenance, 2) paralyzing restrictions on any type of building or renovations ensure that most buildings are old and decaying, and 3) the relatively mild climate means that old drafty buildings are more tolerable than elsewhere.

[2] The go-to mantra for Bay Area natives is to blame Silicon Valley for all of their housing woes. However, Econ 101 suggests that the over-inflated housing prices are likely a symptom of demand exceeding supply. See The Gated City for a good overview of how homeowners have pushed for an incredibly restrictive building code that prevents construction of modern housing and ensures the ever-increasing value of their properties.

Book Club: Risky Medicine...


In my ongoing attempt to better understand the inner workings of the US healthcare industry, I've read yet another book: Robert Aronowitz's Risky Medicine (2015). Within, the author argues that medical practice has undergone a consistent shift away from treating disease and towards 'managing risk' via an ever expanding array of preventative screenings and medications. To summarize the message: risk-reducing drugs are incredibly appealing to pharma companies because their markets are vast; advocacy groups often blindly support screenings irrespective of their efficacy because they help manage fear; and regulation generally sets a much lower bar for risk prevention than for actual disease treatment.

It can take a long time for epidemiologists to tease out these factors, and the true value of preventative services often takes decades to evaluate. By then, they can become entrenched within the culture of advocacy groups, who will often trot out versions of the above, naïve statistics in order to argue for continued screening. Furthermore, the above example (and many advocacy groups [2]) focus entirely on the sensitivity of screenings to the exclusion of specificity: no test is completely without false positives, which can offset the value of the test if the condition that it's trying to detect is sufficiently rare (see positive/negative predictive values).
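To make the specificity point concrete, here's a small sketch (with made-up numbers, not figures from the book) of how a seemingly accurate test can still produce mostly false positives when the condition is rare:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A hypothetical test with 90% sensitivity and 95% specificity,
# screening for a condition affecting 1 in 1,000 people:
ppv = positive_predictive_value(0.90, 0.95, 0.001)
print(f"{ppv:.1%}")  # 1.8% - fewer than 1 in 50 positives is a true case
```

In other words, even with impressive-sounding accuracy figures, almost every positive result for a rare condition is a false alarm - exactly the kind of cost that sensitivity-only arguments ignore.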

Another major argument made by the book is that the standards for approving risk-reducing drugs are too low, thus ever increasing the pool of preventive treatments. Disease risk factors identified during the course of clinical research are rarely as predictive at the whole-population level as they are in study groups. Furthermore, a statistically significant association says nothing about the magnitude of the effect of reducing risk factors. Nevertheless, when pharma companies develop a drug that reduces a risk factor for a disease (say blood pressure), they need only show safety and efficacy in targeting the risk factor, not that the drug actually lowers the incidence of the ultimate disease. Therefore, as above, drugs can become entrenched in medical practice long before we realize that their efficacy is marginal or non-existent.

While it raises thought-provoking concepts, Risky Medicine never gave me a sense of the actual magnitude of these issues. Most of the discussion is theoretical, calling for increased skepticism in the face of new preventative strategies. While some examples of unnecessary focus on risk are discussed, the case studies given chapter-length treatment are quite complex, spanning a large range of issues that muddy the main message and make it difficult to form an opinion without more knowledge. This likely explains why ~45% of the book consists of detailed footnotes, which aren't citations so much as additional background necessary to understand the circumstances being discussed [3].

Controversies such as the efficacy of the prostate-specific antigen test or hormone-replacement therapy in post-menopausal women indicate that both the definition of 'risk' as well as how we determine which risk factors are worth addressing are worthy of continued consideration. Furthermore, as Aronowitz indicates, we haven't comprehensively addressed the quality-of-life effect of moving millions of people out of what was classically regarded as a state of 'good health' into an endless maze of varying levels of risk. Hopefully, someday I'll read a book that discusses these topics more cogently, with explanations of actual research.


[1] In reality, values are rarely this large and obvious, and it's often more complicated. For example, according to Aronowitz, the tendency in the medical community has been to broaden the definition of diseases over time, generally increasing the number of diagnoses regardless of treatment efficacy and therefore further reducing the apparent mortality.

[2] According to the author, patients rarely complain about false-positive diagnoses, even when it's clear that such a case occurred. For example, if a test falsely suggests that a patient has cancer and leads to painful biopsies that rule out disease, it's more likely that the patient will feel elated for having 'dodged a bullet' than ask why the original test was positive.

[3] Only at the very end of the book is it indicated that its various chapters are collected from previously published essays, chapters, and papers. Unfortunately, it doesn't make for a very clear presentation of the argument and leads to a lot of repeated examples and redundancy.

We need to re-think the Ph.D...

A few months ago, I wrote a post about the differences between academic and industry interviews (including some advice for the latter). Since then, I've been involved in more interviews and I've come to the conclusion that the field of academia from whence I came is largely failing Ph.D. students.

One of my frustrations with academic research was a lack of appreciation for details. The 'point' of everything seems to be the results themselves - which generate publications - rather than the process of getting to those results. Papers in genomics often illustrate this: many publications choose arbitrary thresholds for analysis without explanation, or don't bother explaining why they chose a particular statistical test and/or normalization method over others, to name just a few examples. I suppose that you can argue that it doesn't really matter: you're probably going to accept a false-discovery rate of 10%+ anyways, and ultimately, nothing's going to happen if you have a bunch of false positives/negatives in your data.

Things are quite different on the industry side. The results that you generate are directly responsible for your livelihood as well as that of your co-workers. The point isn't to meet some minimum publishable standard of analysis, but rather to convince yourself and others that your results are legitimate. Consequently, it's no surprise that good companies want you to prove that you are an expert in the methods required to generate the results that you've touted on your resume.

Which brings me to the title of this post: shockingly few Ph.D.s actually understand the details of the methods underlying their work. They can probably cite every paper published about their chosen discipline, but when pressed they'll admit that they analyzed their data like this because they were told to do so by a postdoc, or that they performed such and such normalization/analysis/test because it's what another paper did.

I completely understand - I spent 6 years in grad school and another 5-and-a-half as a postdoc. I've actually seen PIs tell students to skip the details of their analyses during lab meetings because they're not interested; they only want to see results. Furthermore, I've never seen a lab where members are actively encouraged to spend time at work improving their skills rather than performing experiments and analyzing data as quickly as possible [1].

As we all know, 85% of Ph.D.s will not be going into academia, and I expect that this percentage will only grow as time goes on and academic jobs continue to become less and less attractive. So regardless of the underlying factors (publish-or-perish, etc.), by focusing on results rather than skills, academia is leaving most of its trainees ill-prepared for the job market in which they will find themselves [2].

If you think that I'm blowing things out of proportion, then consider the following observations: most industry job postings require candidates to have 2-5 years of postdoctoral and/or industry experience above the Ph.D. in order to apply for a scientist position (rather than a technician). Also, my own employer interviews many, many candidates in order to fill each position, and a very common complaint is that candidates fail to show that they understand the methodology upon which their work is based.

The saddest aspect of all of this is that I've been hearing versions of these complaints since I was an undergrad: most university grads aren't going on to become professors, so why are we training all of them as if they were? My fear is that we're just going to accumulate more and more unemployed Ph.D.s until the system breaks under their weight.     

[1] 1) I assume that PIs generally believe that you'll learn by doing, but there's a surprising amount of stuff you can accomplish by jury-rigging random stuff off of the internet while learning very little of substance. 2) Labs that encourage such 'personal development' must exist, but have any biologists ever seen anyone give a weekly update at lab meeting about how they made their code more elegant, or efficient, or that they generalized it and shared it on the lab server? This should be part of the culture. 

[2] There's a stronger case to be made here: I honestly think that academic labs are under-performing because their members aren't learning the most efficient ways to accomplish their objectives. There's a total lack of knowledge-sharing among members of many labs and a lot of reinventing the wheel ad nauseam.

Three-plus months in...

The books that my gf and I have read about parenthood all refer to the three month mark as something significant and auspicious. For instance, all sources agree that the three month mark is when babies may actually begin sleeping through the entire night. Unfortunately, we're not quite there yet.

In fact, our experience has been that things have been getting more difficult as time goes on. During the past month, our son has been getting much fussier, requiring that someone hold and gently bounce him constantly [1]. I'm serious: with the exception of the few minutes when he'll play on his play mat in the morning, someone has to be holding the baby during every waking moment. This is particularly exhausting as the baby wants you to walk around with him - sitting down provokes an immediate freak-out [2]. We knew that this would be rough, so we were operating on the assumption that this was just a hump that we'd be getting over eventually.

Thankfully, this seems to be the case: for the past couple of weeks, our baby has been much more agreeable, even tolerating long walks in the Baby Bjorn and drives to run errands. We're still noticing some regression towards fussiness once-in-a-while, but being able to get out of the apartment for a bit during weekends has been amazing.

We can only hope that it's going to get easier from this point on, because reflecting on it now, it's difficult to overstate how much of a complete productivity-killer a newborn can be. It's pretty much impossible to get anything done at home - yes, the baby will sleep for brief periods, but the frequency and duration of these naps are relatively unpredictable [3]. You could take advantage of the baby's early bed time, but then again, it's unlikely that you'll have much energy left. Seriously, I rarely went to bed before 11 pm before, while over the past several weeks, our entire family has been under the sheets by 8:30 multiple times. Taking care of a baby is exhausting, especially for mom.

All things considered, life is slowly inching towards some version of 'the beforetimes': I've been able to play a few videogames here and there, and it only took me two weekends to write this short blog post. Happy times! 


[1] Here's a life-hack: the Fitbit Charge HR that I purchased to 'game-ify' my attempt to lose the five pounds I've gained since the baby was born records knee bounces as steps.

[2] The nice thing about the first few months was that at least we could sit down and watch TV or read while the baby fell asleep in our arms.

[3] Which brings me to a geeky sub-rant: adults require that every videogame allow a) pausing at any time, and ideally b) saving at any time. Modern technology can easily accommodate these features, and games that forbid pausing/saving in order to 'increase challenge' are clearly targeting kids and the unemployed.

Book Club: The Patient Will See You Now...

Eric Topol is a cardiologist known for his advocacy for technology-based disruption of the healthcare industry. I heard him make some provocative statements about creative destruction in medicine on the EconTalk podcast, and since I'm now working in the broader healthcare industry, I decided to read his book, The Patient Will See You Now (2015; Basic Books).

I'm not sure for whom this book is intended: it covers a lot of ground, and many of the technical concepts that it discusses are far more controversial than presented. Topol tries to tackle a multitude of weighty subjects in a single book, and I'm sure that the general exuberance for all things 'omic' and 'big data' is going to ruffle a few feathers. In the interest of my time and yours, I'll comment on three major themes.

Paternalism in medicine

The first section of TPWSYN criticizes the pervasive paternalism in medical practice. While all professions are expected to be self-promoting (and self-serving), medicine is somewhat unique in its degree of self-congratulation and self-importance. In particular, Topol is critical of the field's lack of interest in 'democratizing' the healthcare process: essentially, there's a lot of information available for patients to make informed decisions about their care, but they rarely have access to their own medical data [1].

I understand why physicians could be wary of too much patient 'involvement': doctors already complain about patients citing 'Dr. Oz' when questioning diagnoses and prescriptions, and it's easy for desperate patients to fall for misinformed woo that they read online. But ultimately I agree with Topol: patients are already organizing support groups and sharing information online, and MDs can either be there to shape the process, or allow it to happen without their involvement (the lack of practitioner involvement goes a long way towards explaining why so-called Electronic Medical Records, or EMRs, are so physician-unfriendly [2]).

Omics will revolutionize everything

Anyone who's followed the literature on things like genome-wide association studies (GWAS) [3] knows that, for many complex diseases, they've been quite controversial and/or disappointing, with little of the phenotypic variance explained by genomic factors (see Visscher et al. 2012, for example). They're also very expensive. Regardless, Topol presents them without any controversy, as if they're going to explain the root causes of everything - he's firmly on the side of 'we just need more data'.

But that's the rub: the root cause of every disease isn't purely genomic. Rather, disease phenotypes result from the interaction between genes and the environment. Furthermore, no law says that these interactions need be 'additive', so saying that a disease is 40% genetic and 60% environmental doesn't make sense. More data may be good from an academic perspective, but much more work needs to be done to translate it into clinically actionable findings - there's a big difference between the statistical significance of an effect and its magnitude [4].

It's also worth pointing out that a major challenge in applying the results of GWAS in a clinical setting is that, in addition to being sample- and size-specific, the results are also often very population-specific. This means separate studies are required to identify risk loci associated with cancer in Caucasians (the most well-studied group), versus Africans, or Asians, or Latinos, etc. So unless the diagnostic value of these studies increases dramatically, it may be difficult to justify the costs.

The smartphone as the all-in-one medical diagnosis device

TPWSYN spends a lot of time discussing how technological advancement is shrinking the cost and size footprint of complex medical devices. In particular, there are apparently several excellent proof-of-principle technologies that can attach to your smartphone and collect information on things like blood pressure, temperature, or the visual status of your inner ear, nose, or throat, among others. Via software and/or telemedicine, there's a possibility that such devices could allow routine diagnoses of minor conditions without the need for expensive, time-consuming hospital visits.

Clearly, there's a lot of exciting potential in such devices: as an example, diabetics have been able to monitor their own blood-sugar levels for years now. However, I think that this type of technology brings up one of the major caveats of the entire book: 'more data' is only useful if clinicians know what to do with it. Consider the following: maternity wards have largely adopted continuous fetal monitors that affix to the mother's belly, in place of the traditional practice of 'checking in' every so often. This has coincided with a large spike in the number of unplanned, emergency C-sections, with no corresponding drop in rates of infant mortality. Most likely, continuous monitors exposed a large number of 'normal' fluctuations in prenatal heart-rates and contractions, which spooked unfamiliar medical staff into performing unnecessary operations [5].

In the fullness of time, we'll likely figure out how to perform analytics on 'big data' in order to produce meaningful effects on individual patient outcomes (not simply 'statistically significant', but actually noticeable in magnitude at the individual level). A lot of this is going to come from combining 'omics' and monitoring with work unraveling the underlying mechanisms of disease. But the results that one obtains from data are only as good as the data themselves, as well as the hypotheses under which they are interpreted. I'm not convinced that collecting ever more data of untested quality is the best place to focus the bulk of our efforts.

Ultimately, much of what Topol discusses in his book will likely come to pass - at least in implementation if not in actual value to patients. But without serious discussion of the subtleties of the underlying science, it amounts to much more hype than information.

[1] Topol also criticizes medical associations for levying non-evidence-based criticisms against things like allowing registered nurses to handle diagnosis and prescription in 'routine' practice.  

[2] See The Digital Doctor, by Robert Wachter.

[3] e.g., the entire journal Nature Genetics.

[4] Consider the types of results that you (used to) get from 23&me: If you have a variant that increases your risk of disease X by 2%, are you going to change anything about the way you live your life? Would it even help at an individual level or are you only going to see an effect in aggregate?

[5] See Expecting Better, by Emily Oster.

The return of the 'Book Club'...

One of the best things about the past nine months is that I've been getting back into reading for pleasure, as opposed to reading for work [1]. I've been reading so much, in fact, that I splurged a bit and got myself a Kindle Paperwhite to replace my aging 3G. I can't stress enough how useful an illuminated e-reader is when you're trying to soothe a baby. 

But I digress. During my Ph.D., I maintained a blog - sadly, now defunct - in which I used to write regular 'book club' posts, discussing books that I thought would be of interest to folks other than myself. I think I'm going to start that up again here with the book I'm currently reading. But for now, I'd like to give a brief shout out to a couple of interesting things that I've read during the past few months.

The Signal and The Noise, Nate Silver, 2012.

Nate Silver's FiveThirtyEight blog applies quantitative analysis to topics that other news organizations typically treat qualitatively, especially politics. His book focuses on forecasting: more specifically, trying to identify the factors that lead to good predictions (empiricism, Bayesian inference, rigorous self-criticism and identification of distorting incentives, etc.) versus those that lead us astray (personal biases, poor/overly complex models, hidden variables, etc.).

Of note, Silver is highly critical of the lack of proper probabilistic presentation of predictions, such as how the multiple predictions of the severity of climate change's effects are implicitly presented as equally probable scenarios. As he discusses, the most catastrophic outcomes are on the tail end of a probability distribution under any reasonable model. Focusing on 'worst-case' scenarios simply makes the models incongruous with the observed trend of temperature increase (I should note that Silver is not a climate change skeptic).

Finally, Silver is one of many people who point out that simply looking at the success rate of any talking head's prediction history should be enough to stop every news organization from taking their claims at face value every single time [2].

It's an interesting read, though frequentists may get upset.

Boss Fight Books, Various, 2014-

Anyone who knows me knows that I love videogames. However, I quit reading gaming magazines and websites several years ago. For some reason, in comparison to writing about other media (books, movies, etc.), games-writing is uniquely intolerable. I don't think that it's simply the result of a young medium finding its legs. Rather, my frustration comes down to two factors: first, a lack of history, wherein young gamers don't play old games and interpret everything through the lens of their own solipsism; and second, a pervasive culture of hyperbole in which nuance is discouraged and everything is the best thing 'evar!!!1!11!1!!!!'

Boss Fight Books launched as a Kickstarter with the goal of writing a series of long-form essays (~100 pages) each centered around a single classic game. Each book's approach is unique, with some written in standard historical chronicle style (à la Masters of Doom), while others range from discussions of the game's effect on the author's personal life, to in-depth dissections of a strategy game's mechanics and underlying code.

While the writing in a few of the books is overly experimental and off-putting - Galaga is particularly weird [3] - the majority are fascinating little reads. At first I thought that nostalgia would make me prefer the books about games I'd played as a kid, but I've been surprised to find that my favorites, Jagged Alliance, Bible Adventures, and ZZT, are all about games that I missed out on... but would love to revisit if and when I get more free time. 

If you enjoy videogames, I'd recommend trying out a few of the books. They're still publishing and going strong, and I'd like to support them simply because, unlike every other classic game discussion on the internet, they're not focused solely on console games from Japan.

[1] Of course, the two are not mutually exclusive. For instance, some of the books that I've read have been about coding and data analysis. Others have been about the inner workings of the American healthcare system, which classifies as both 'horror' and 'mystery'. On the other hand, I have read a bunch of pulp-fantasy, which is unlikely to help me at work unless we are attacked by orcs or trolls. Thinking about it, I'm not sure how what I've read would help in this hypothetical situation...

[2] Of course, this assumes that the primary purpose of the 'news' is information and not simply entertainment.

[3] The Galaga book is written as 255 mostly independent paragraph-long tidbits weaving advice on playing the game and memories of classic arcades with stories about the author's sexual abuse at the hands of his father. It's a courageous effort to be so open about such a difficult part of his life, but its lack of flow is jarring to the reader. I suppose that as an art piece, that may have been its objective.

Time, or lack thereof...

One of the best aspects of my new job is having a lot more free time. Don't get me wrong, I doubt that any job like this supports a nine-to-five work week, but compared to the hours that most postdocs put in, it's been really nice. I can't stress enough how important this free time is to productivity. While I have been playing videogames and building Legos [1], not feeling completely burnt-out by work demands has also made it much easier to spend free time developing work-related 'skillz'.

For example, I've read books and played around to improve my programming abilities, so that I'm now familiar with things like pysam, pandas, matplotlib, and git/GitHub, all of which can be added to my resume. This supports my new mantra: brute-forcing your way through problems and analyses never pays off as well as learning how to do them properly. Sadly, all of this has changed significantly with the arrival of our baby.

Look, based on everything I'd heard and read, I expected that a baby would take up a lot of our time. Our reality seems to be a bit more extreme: our baby takes up ALL of our time. The books say that newborns sleep an average of 18 hours a day in 2-3 hour bursts, in between which you need to feed and clean them. Our son has never slept more than ~10 hours in a day. Rather, he'll frequently stay up for 5-6 hours at a time and requires constant attention to avoid getting upset.

Under such circumstances, it's neither easy, nor fair to hand the baby off to his mother so that I can get work done [2]. Furthermore, it is important to me that my gf and I take care of our baby: it's nice to have family in the area or pay for a babysitter now and then, but I wouldn't want to have a full-time assistant raise my kid.

I'm sure most parents can relate: it's often stressful. It feels like you're caught between a rock and a hard place, wanting to answer emails and get work done while feeling guilty if you're not taking care of and/or spending time with the baby. I'd been told that there's no 'right' time to have children, and now I see why: when is the right time to give up all of your free time and then some?

Thankfully, I've heard a rumor that babies grow out of this eventually.


[1] A few years ago, the Tested.com podcast revealed to me that there are actually adults who build Legos. I bought my first kit in years for 'shits and giggles' and discovered that putting blocks together is incredibly relaxing and therapeutic. I'm hooked, and am saving all of the blocks and manuals for when our son is old enough to play with them (which is 3+ according to the packaging).

[2] Mind you, sometimes 'tag-teaming' is unavoidable, if only so that we can catch a bit of extra sleep.

Rant: putting a little effort into public speaking...

I'm continuously baffled by how little effort scientists put into public presentations. It's easy to downplay the importance of talks when there are so many other constraints on our time, but we need to take into consideration the sheer amount of collective time that bad talks are wasting.

It's odd that there seem to be no incentives to improve the quality, or most importantly, the timing of talks. For instance, I can't count the number of talks that I've attended where the speaker's gone way over time [1]. Conversely, I can count the number of times that I've seen someone ask the speaker to stop on one or two fingers. Scientists have never struck me as sheepish about offending colleagues' feelings when it comes to reviewing papers or criticizing work during lab meetings. And yet there seems to be some kind of universal ban on offending folks, or even providing constructive criticism regarding presentations.  

I wish that it would become culturally accepted that unnecessarily long or uninformative presentations waste the precious time of every single attendee. It should be acceptable and expected that a moderator first give a speaker a signal that their time is coming up (a five-minute warning, for example), and then politely cut them off when that time arrives. I have a feeling that people would feel embarrassed to be cut off, which would provide at least some incentive to do a better job putting together their talks [2].

Here's a general observation: despite over a decade in the 'biz', I have never heard a seminar attendee describe a presentation as 'too short', while the converse is as regular as clockwork. This is probably an excellent indication upon which side to err when prepping a presentation. 

Finally, I'd be remiss not to bring up two personal pet-peeves about seminars:

1) I've noticed a trend towards a particular presentation style that I call the 'look how much work I've done!!!'-talk. This is where the speaker focuses on telling you about the effort they've put into something, usually by presenting a lot of slides without going into detail about any of them. In my experience, this is always a bad idea. It's much better to focus on one aspect of a project in sufficient detail to convey why it's important, and why people should care - both of which are rarely as self-evident as people would like to think.

2) The purpose of overview slides is to help the audience put the various parts of a talk into context. However, I notice that most people use them as a long-winded abstract. On top of taking up valuable time, I don't find it helpful to receive a barrage of concepts all at once, before they're properly explained. Again, I'd focus more on why it's important, and why people should care at this point. Also, I don't think that any talk shorter than 30 mins needs a minute-long overview slide [3].

P.S. I think that these concepts should apply to all talks - not just big, public seminars. No need to waste time polishing lab meeting presentations, but it's no less important to be considerate of your audience's time.

[1] I've actually been to a conference where our entire session had to miss dinner because we were so ridiculously behind schedule.

[2] While practicing a talk before the official delivery is ideal, I don't think that this is required. With a bit of experience, you can develop pretty reliable rules-of-thumb about how long you should spend on each background or data-heavy slide and so on.

[3] I know that a lot of people disagree with me on overview slides, but I've seen so many talks begin with a 'First I'm going to give you an introduction to X. Then I'll talk about some of the results I've obtained, before discussing their implications. Finally, I'll end with some conclusions'-slide. I don't think that we need to be reminded of how a talk works. If it's not helpful, it's unnecessary. 

'American maternity leave' or 'Reason #127 why being a postdoc sucks'

There's been a lot of talk in the media lately about how maternity/paternity leave allowances (or lack thereof) here in the States pale in comparison to other countries. Essentially, there's no 'guaranteed' paid leave at the federal level, and individual states vary with respect to their rules (as with everything here, they tend to be complicated).

Beyond the mandated rules, individual firms can offer improved leave, and tech companies have been making headlines about their generous benefits. Since I work for a Silicon Valley tech firm, my paternity leave has been painless. My gf, on the other hand, is a postdoc.

Prior to taking time off, she attended an informational seminar about how leave works. She was told that the university's disability insurance would cover between 55 and 70% of her salary during the first ~8 weeks of leave [1]. The difference between the high and low coverage would come from whether she had paid into 'State Disability Insurance', something neither she, nor I, had ever heard of.

This week she received a letter explaining that she would be receiving 55% of her base salary, up to a maximum of ~$680 per month. Let's all think about that for a second: they're basically saying that they'll cover 55% of a maximum annual salary of ~$15,000!!! The rent on our undesirable one-bedroom apartment is ~$22,000 annually, and that doesn't include food, gas, and all of those other things you suddenly have to buy now that you have a baby. Apparently, the HR folks weren't aware that such a maximum existed. 'Did we say 55%? We meant more like 17%. Oopsie!' More seriously, if I hadn't scored this 'adult' job before we had the baby, we would've been digging into credit cards and throwing emergency fire-sales not to end up on the street.

I'm not sure if folks realize this, but since the American Academy of Pediatrics recommends that you breastfeed your child every 3-4 hours round-the-clock, and lactation consultants recommend not bottle-feeding before ~4 weeks, it's pretty difficult for mom to get back to work for the first month-and-a-half. So while a male postdoc can be back in the lab in a matter of days, retaining that full, awful postdoc salary [2], postdoc moms get to struggle through all of the added responsibilities of motherhood while dealing with a few months of below-minimum-wage income.

I suppose that the solution is not to have a kid until you're done postdocing!

Stay classy, academia.

[1] Her first week was actually unpaid due to some weird concept of a 'waiting period', which neither of us can understand.

[2] Now in the thick of it, I am blown away by the idea that people go back to work within days of having a kid. I went back after 2 weeks and it's been pretty tough: first, because our son still doesn't sleep much and requires almost constant soothing, and second because I'm not getting enough sleep. But then again, I want to make a good impression at work - someone has to pay the bills (see above).

Should we agree to disagree?

This is a follow-up to a previous post wherein I complained about the lack of consistency in childcare advice in the maternity ward. We've since spoken to a pediatrician who told us that the number one complaint to healthcare professionals from new parents is the lack of consistency in information. Nowhere is this more in evidence than in childcare/parenting websites. 

Of course people can blog and share information about their parenting philosophies all they want. But some of these sites cross the hazy boundary between some random person's opinion and actual healthcare advice. While some sources are maintained by healthcare organizations and present expert opinion [1], a little digging reveals that more than a few are run by random moms who extrapolate from their own anecdotal experience [2].

See, it's not like we found these sites from random internet searches. Rather, multiple care-providers have given us pages of 'useful' resources. One such page of links contained several instances of contradictory advice, depending on which site we visited. Similarly, if we speak to our pediatrician or lactation consultant, we'll get yet more differing information. We're not looking for opinions on how to raise kids, here. Is it really so difficult to find out how much he should be eating?

My gut tells me that most of the stuff we're worried about is nothing, but you also have to keep in mind that our family is pretty sleep-deprived and antsy at this point. So when one source is telling us that we absolutely need to do X, while another says that we absolutely should not do X, it's more than a bit frustrating. I'm on the side of trusting our doctors, even though the handouts that they've given us contradict their advice.


[1] This is a much broader topic, but I think that we scientists don't give people enough credit for their mistrust of 'expert opinion'. It's a bit ironic how a healthcare organization can, in the same paragraph, 1) strongly recommend that all babies be breastfed and, 2) point out that they spent years discouraging women from breastfeeding. Or how Dr. Spock used to recommend that babies sleep on their stomachs, while it's now 'obvious' that they should sleep on their backs. It would behoove folks to be more careful about making policy pronouncements. I suppose that researchers are people too, and hedging suggests lack of confidence.

[2] There's value in sharing experience, but there is also a hell of a lot of woo and conspiracy theory on these sites. I've noticed a few instances of pairing mother-to-mother advice with organic/anti-GMO screed, for example. 

The reluctant minimalist...

I used to read voraciously. In fact, I kept an annual list of all of the books I'd finish, and most years I'd average more than a book a week. Most of the stuff I read as a kid was pulp-fantasy, but during grad-school I switched to reading a lot of science; both technical monographs as well as general audience stuff. Eventually, I accumulated a prodigious library, which I've managed to cart around North America at substantial personal expense.

A few years ago, realizing that moving more books into my tiny American apartments just wasn't practical, I went the way of the Kindle [1]. Thus while not growing physically, my collection of tomes has been patiently sitting in the closet for the day when we move into a larger place, thereby enabling me to construct the library of my dreams. 

However, as with all aspects of life, the baby changes everything. It's amazing how much space an 8 lb human can occupy. We're quickly landing upon the mantra of big-box department stores: shelf space is a scarce resource and therefore at a premium. It's past time to cut down on collecting crap that I'm not using.

I wonder if anyone at the donation place will actually want to read D'Arcy Thompson's On Growth and Form?

With a heavy heart, I have begun purging my book collection of as much as I can bear, which is much more than it used to be. Over the years, I've become quite convinced that I'm carrying a little bit of the 'hoarder phenotype'. For example, I used to find myself worrying about what I'd do if a particular book went out of print. Obviously, the correct answer is, 'Who cares? If I haven't revisited it yet, it's probably not a big deal'.

I've also begun to think about that strange sense of pride that comes from displaying one's collection(s). Do we really expect people to be impressed by the number of movies, albums, videogames, etc. that we have on our shelves? In this era of Netflix and digital music, it seems even more odd to be proud of one's commitment to buying every single superhero movie or whatnot.

So, it's an uphill battle, fighting against this crippling desire to shove stuff onto my shelves, especially now that I've got more disposable income. Let's begin with baby-steps - rather apropos, no? I'll work on trimming down my junk and, whenever possible, converting everything to digital. At the same time, we'll agree that I can keep one useless physical collection going strong. I could never part with my videogames, of course...

[1] Super pro-tip: If you're going to have a baby, make sure you get an ebook reader (preferably with a backlight). You will rarely have more than one hand free, and at ~2 a.m. reading is a good way to pass the time as the baby falls asleep. 

The second crossing...

I decided to name this blog 'Crossing the Rubicon' because I knew that 2015 would be bringing big changes to my life. The first was my decision to leave academia and pursue a career in industry (still no regrets!). The second happened this past week, when I became a father.

Some folks who've known me for a long time were shocked to hear that my gf and I were having a baby. To be honest, it's not until relatively recently that I even realized that I wanted to start a family. I'd spent so long without any form of job or 'locational' security that I honestly hadn't spent a lot of time thinking about it. But when I seriously began thinking about 'alternative' career choices, it opened up a whole new avenue of life-possibilities. This included generating the kind of stability that my partner and I needed to pull the trigger on such a big move.

Now we're here. I have no idea what kind of father I'll be. Cool, I assume, as I've already got a crap ton of Legos and video games for my son to play with when he gets older. I've also already begun reading things like Plato to him - the Dialogues mind you, we'll save The Republic for when he's more than a few days old. 

Now for a couple of observations about our experience with the whole maternity ward process:

Nursing staff seem to be obsessed with arbitrary thresholds. For example, babies lose weight during their first few days of life due to things like passing retained stool and water loss. Most babies lose less than 10% of their birth weight before they shift to rapidly gaining weight. Our son lost just under 11% of his birth weight [2], which sent the nurses into panic mode. This led them to admonish my gf for doing a bad job at nursing and not listening to their instructions, among other things. Conversely, the pediatrician wasn't worried about this number at all, rightly pointing out that it's a threshold set by looking at the maximum weight lost by ~95% of babies. Less than a percentage point over the threshold isn't rare enough to panic over.

In fact, one of the pregnancy books that I read, Emily Oster's Expecting Better, repeatedly emphasized this point: these are arbitrary thresholds, typically set at two standard deviations away from the mean of some measured distribution. Medical staff shouldn't be any more excited by what amounts to a P value of ~0.045 than a grad student should be about seeing it on the umpteenth test they've run that afternoon.
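
To put a rough number on it: the two-sided tail of a normal distribution beyond two standard deviations is about 4.6%, i.e., roughly 1 in 22 perfectly healthy babies will get flagged. A quick standard-library sketch (the function name is my own):

```python
import math

def two_sided_tail(z):
    """Fraction of a normal distribution lying more than z
    standard deviations from the mean (both tails combined)."""
    return math.erfc(z / math.sqrt(2))

# A cutoff at two standard deviations flags ~4.6% of perfectly
# normal measurements - no cause for panic on its own.
print(round(two_sided_tail(2.0), 4))  # 0.0455
```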

Secondly, we were somewhat disturbed by the lack of consistency in what we were told by different members of the nursing staff, who could contradict one another multiple times in a single day. I'm not asking for super-human, encyclopedic knowledge of all things newborn from every nurse in the ward, but for sleep-deprived parents desperately trying not to kill the screaming, squirming thing that they're holding, it would be nice to feel like people weren't just making their answers up.

[1] Is this tautological?

[2] Some years ago, a grad student asked me what use the metric system would have in the lives of ordinary Americans. When my son was born, they weighed him in grams (3,570 g), converted it to pounds and ounces (7 lbs 14 oz), and wrote both on his chart. A few days later, a nurse came by and weighed him again - also first in grams (3,180 g) before converting to lbs/oz (6 lbs, 15.8 oz). She then looked at me and asked if it was more than a 10% weight loss. Instead of taking my word that he'd lost more than 357 g, she proceeded to open up some internal website and plug in the before-and-after lbs and oz values to get the dreaded news that the threshold had been crossed. Clearly the Imperial system's trouble with ratios, fractions, and percentages is not outside of the lives of 'ordinary' Americans. 
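
In grams, of course, the check the nurse needed is a single line of arithmetic. A sketch using the weights from his chart:

```python
birth_g, later_g = 3570, 3180          # weights from the chart, in grams
loss = (birth_g - later_g) / birth_g   # fraction of birth weight lost
print(f"{loss:.1%}")                   # 10.9% - just over the 10% threshold
```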

Interviewing, then and now...

One of the regular duties of grad students and postdocs is to interview candidate lab members. I was never actually trained to conduct proper interviews, so I primarily focused on figuring out whether folks would be a good 'fit' for the lab in terms of personality and work ethic. The candidate's scientific 'skillz' were important, of course, but since such positions are theoretically all about learning new skills or improving old ones, specific knowledge seemed less important than evaluating someone's capacity to learn.

Flash-forward to the present: Now I'm interviewing candidates for industry positions and things are a bit different. As I was told by friends and colleagues who'd made the jump to industry, biotech companies generally try to find people with the skills required to fill specific positions [1], such as someone with the experience necessary to run high-throughput drug screens on cell-lines, for example. This is typically made clear in the job posting, which lists both required and preferred skills. Ultimately, while asking for a particular skill set is well and good, anyone who has looked over lists of job postings knows that some of these sets are about as common as unicorns [2].

Determining whether a candidate will fit in with the team culture remains as important in a company as it does in a lab. However, unlike in the lab, we aren't all working in our own vacuums. Folks in our team divvy up tasks, share resources, and collaborate on getting the job done. So, while the opportunity to learn continues to exist in abundance (I've been benefitting immensely from it!), there is a trade-off in hiring someone who may take months, instead of weeks, to get up to speed - even if they're brilliant and clearly a great fit. This is probably a good thing to keep in mind if you're ever frustrated with the interview process. Sometimes companies are focused on a specific set of skills, and not getting a job isn't necessarily a reflection of your quality as a candidate. Mind you, I realize that that's little consolation.

Switching topics a bit - I have noticed some little interview details that folks looking to get into industry may want to consider. First, I work with some of the smartest people that I've ever met. Industry is not the place where academic failures end up: like in academia, publications and results show that you can get the job done. You're still likely selling yourself as a scientist, so think about how you're portraying yourself before saying that you're tired of 'academic research' or 'writing papers'.

Secondly, company hiring committees are much more interested in the technical details of a candidate's work - after all, it's highly unlikely that you're moving to a company to continue your current research program. People will ask why you chose to analyze data in this way, or why you applied this or that statistical test. It goes without saying that everyone should understand the underlying logic behind their work. However, this is particularly important when the point of the interview may be to determine how much candidates really know about the subjects in which they claim expertise. Saying that someone 'told you to do it like this' may be a deal breaker.

Finally, in my own experience, grad-school and postdoctoral interviews tend to be a bit more casual than their industry counterparts. This should be obvious, but dress up [3] and leave the lab/intra-field gossip at home.


[1] In a Q&A with Richard Scheller, former VP of Research at Genentech and now at 23andMe, I asked whether he thought that hiring to fill specific roles in industry was a good thing. He told me that since biotech projects are always in flux, he'd been trying to steer his managers towards focusing on adaptability rather than specific skill sets. However, managers are responsible for getting projects done, so it was unlikely that much was going to change.

[2] For example, I saw a posting for a Ph.D.-level scientist with bench-top genomics experience, solid coding skills in at least Python and Java, and experience maintaining large-scale MySQL databases. I'm sure such people exist, but wow.

[3] I'm amazed by how casually some people will dress when giving invited seminars or during faculty candidate interviews. It's a fine line: I don't think that we should all be wearing suits and ties, but graphic tees and sneakers are a bit much, no? The awkward flipside, of course, is that if you're interviewing in Silicon Valley, all of your interviewers will be dressed like they're going to a BBQ. Them's the breaks, I guess.

Learning to do things well...

As I mentioned in my post explaining my decision to leave academia, one of my frustrations about academic research was that I felt like I never had the time to figure out how to do things properly. That's a bit of a nebulous concept, but here's what I mean:

In most of the labs in which I was a member, a large part of my work was computational. I, like almost everyone else, was self-taught in terms of how to analyze data. We either figured it out on our own, or we shared tips on how to get things done. Most of it 'worked' [1], but as everything was cobbled together, there was no guarantee that it was particularly efficient. We got things done, but not necessarily done well.

When I started my new job, my first task was to analyze some next-gen sequencing data. This is something that I've been doing for >5 years, so no sweat. I started running some scripts and, within a couple of hours, a coworker stopped by my office. He said that he'd noticed that I was running an analysis on a SAM file that was taking up a lot of memory and that he could probably show me how to do it faster and more efficiently using a Python module called pysam.

Mind blown.

Some folks will probably read this post and think it's rather quaint that I'd never learned to use pysam. But guess what: I know that I'm far from the only one. Most of the folks I've worked with were happily Perl-ing their way through giant files one line at a time. Had I known about this stuff a few years ago, my productivity would've increased dramatically. Of course, this has led me to wonder what other efficiencies I could add to my workflow.
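
For anyone who hasn't crossed paths with it, here's a minimal sketch of the contrast. The filenames are made up, and the file-reading parts are commented out so the flag-checking helper stands on its own; pysam's AlignmentFile is the interface my coworker pointed me to.

```python
def is_unmapped(flag):
    """Per the SAM spec, bit 0x4 of the FLAG field marks an unmapped read."""
    return bool(flag & 0x4)

# Line-at-a-time text parsing (what I was doing): walk the SAM file
# ourselves, skipping '@' header lines and testing the flag by hand.
# with open("sample.sam") as fh:  # hypothetical file
#     mapped = sum(1 for line in fh
#                  if not line.startswith("@")
#                  and not is_unmapped(int(line.split("\t")[1])))

# The pysam way: same count, delegated to htslib's compiled code.
# import pysam
# with pysam.AlignmentFile("sample.bam", "rb") as bam:  # hypothetical file
#     mapped = sum(1 for read in bam if not read.is_unmapped)

print(is_unmapped(4), is_unmapped(99))  # True False
```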

Since I'm not struggling to juggle the demands of doing a postdoc anymore, I've begun reading books about this stuff.

I've only been 'out of the game' for a few months, and there are already so many things I've learned that I think should be universally applied in academic labs. So much could be improved by using GitHub to version and share code, setting up Google Docs and Google Drive to track team projects, and writing your computational 'lab book' in Markdown to quickly organize information [2], for example.

Part of bettering one's skills is taking the time (i.e., evenings and weekends) to read guides and practice. It also helps to be surrounded by people with different specialties and skills, as long as those people have the time and inclination to share their skills with others. I'm worried that the rat-race, pedal-to-the-metal mentality of academic life is preventing the transfer of incredibly useful skills and reducing productivity overall [3].

Anyways, I'm hoping to use this blog to share some of the 'skillz' I've acquired with folks who may benefit from them, so stay tuned.


[1] 'Worked' is subjective here. The quality of scientific software is often depressingly poor. For example, I can't count the number of times that coworkers and I have found bugs in scripts and code. I've also frequently seen software made public with incomprehensible, idiosyncratic options and requirements. For instance, requiring that all files be located in the same directory as the program's executable, or even worse, requiring that the user produce a particular filename/directory structure because the program doesn't even allow you to specify the name of input files, let alone their location(s).

[2] For many years now, I've been storing notes with code-snippets, or descriptions of how to use various pieces of scientific software in plain text files. While useful for portability purposes, it becomes unwieldy to organize large text files into sections and clearly indicate what is comment vs. what is code. Of course programmers would've solved this problem: Markdown is a super-streamlined version of HTML that allows you to easily create headings, code blocks, numbered/bulleted lists, embedded links, etc. simply by how text is laid out in a document. I recommend an awesome live Markdown editor called MacDown, which can be used to view/edit Markdown files, or export them to HTML/PDF. I've also started a GitHub repo where I'm going to store all my notes, so you can see what it looks like here.  

[3] I think that it's possible to transfer 'knowledge' in the form of results without transferring the skills required to generate said results. A big difference between academia and industry that I've noticed is that the latter doesn't have a pathological aversion to discussing details. I don't know how many times I've heard academics ask me and others to skip the details on how we were going to do something and focus on what we were going to do. Seems like a great recipe for the prevention of sharing useful skills.

Wasting my time...

One of the most irksome aspects of working in computational biology is how frustrating it can be to analyze other people's data (OPD) [1]. By OPD, I don't mean quickie files generated for personal use; rather, I'm talking about datasets ostensibly provided so that other folks can build upon, or at the very least replicate, published work. I'm talking about anything from supplementary material included with papers and/or software to big, taxpayer-funded public databases.

Here's a typical scenario: I need to combine two or more pieces of data, such as a list of human disease-associated variants identified in a study with some database of previously published variant associations. Conveniently, both datasets use the same format for identifying variants, which means that this should boil down to finding the intersection of a particular column in each of the tables. This shouldn't take more than five minutes, right?

Unfortunately, I quickly notice that some proportion of the variants aren't being found in the database, even though the referenced origin of said variants is in there. Fifteen minutes of searching reveals that many of these are just typos; the others I'll have to check in more detail. I decide that I'd better write a script that cross-references the references [2] against the variants to catch any further mistakes, but this ends up spitting out a lot of garbage. Some time later, I realize that one of the tables doesn't stick to a consistent referencing style [3], so I can either go through the column and fix the entries manually, or try to write a script that handles all the possibilities. A few hours later, I've finally got the association working, minus a dozen or so oddball cases that I'll have to go through one by one, only to find out that much of the numeric data I wanted to extract in the first place is coded as 'free text'. Now I'll need to write more code to extract the values I want. However, it's now 7 pm, and this will have to wait until tomorrow.
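
To give a flavor of where the hours go, here's the kind of normalization shim these sessions produce. The rows, the referencing styles, and the norm_ref helper are all invented for illustration - real tables are messier:

```python
import re

# Hypothetical rows: (variant ID, reference as it appears in the table).
study = [("rs123", "Smith 2010"), ("rs456", "Jane 2011")]
db    = {"rs123": "Smith et al., 2010", "rs789": "Doe et al., 2012"}

def norm_ref(ref):
    """Collapse inconsistent styles ('Smith 2010', 'Smith et al., 2010',
    even '[first name] YYYY') down to a comparable 'name yyyy' key."""
    m = re.match(r"(\w+)\b.*?(\d{4})", ref)
    return f"{m.group(1).lower()} {m.group(2)}" if m else ref.lower()

# Flag variants that are missing outright, or whose references disagree.
for variant, ref in study:
    if variant not in db:
        print(f"{variant}: not in database")
    elif norm_ref(db[variant]) != norm_ref(ref):
        print(f"{variant}: reference mismatch")
```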

I've encountered this sort of problem many, many times when working with scientific data. Why are we so tolerant of poorly formatted, error-ridden, undocumented datasets? Or, perhaps a more appropriate question: why don't scientists have more respect for each other's time? Is it more reasonable for the dataset's generator to spend a little bit of time checking the reliability of their data programmatically, or for each person who downloads the data to waste hours (or days) working through typos and errors?

I get it: after spending months writing and rewriting a manuscript, rarely do you feel like spending a lot of time polishing the supplementary materials. Mistakes happen simply because you're in a rush to get a draft out the door. On the more cynical side, I've also been told outright that making it easier for people to use my data isn't worth my time. Neither of these considerations explains the errors found in public databases, however.

I don't have a solution to the problem, but I'm pretty sure that the root cause is one of incentives: that is to say, there are few professional incentives for making it easier for your colleagues (competitors) to replicate and/or build upon your work. Perhaps we need a culture shift towards teaching better values to students or, more realistically, we need journals to actually require that data follow minimal standards, perhaps including requiring that mistakes in supplementary tables be fixed when pointed out by downstream users.

[1] Who's down with OPD? Very few folks, I'm afraid.

[2] Cross-referencing has always struck me as the lamest, overused, 'nerd word' on TV. I cross-reference all the time, but I think this is the first time I've actually referred to it as such.

[3] e.g., [First author's first name] YYYY. I wish I was making this up.