Thursday, December 12, 2019

Computing models for a neuroimaging lab

I had a conversation with a colleague recently about how to set up computing for a new neuroimaging lab.  I thought that it might be useful for other new investigators to hear the various models that we discussed and my view of their pros and cons.  My guess is that many of the same issues are relevant for other types of labs outside of neuroimaging as well - let me know in the comments below if you have further thoughts or suggestions!

The simplest model: Personal computers on premise

The simplest model is for each researcher in the lab to have their own workstation (or laptop) on which all of their data live and all of their computing is performed.

Pros:

  • Easy to implement
  • Freedom: Each researcher can do whatever they want (within the bounds of the institution’s IT policies) as they have complete control over their machine. NOTE: I have heard of institutions that do not allow anyone on campus to have administrative rights over their own personal computer.  Before one ever agrees to take a position, I would suggest inquiring about the IT policies and making sure that they don’t prevent this; if they do, then ask the chair to add language to your offer letter that explicitly provides you with an exception to that policy.  Otherwise you will be completely at the mercy of the IT staff — and this kind of power breeds the worst behavior in those staff.  More generally, you should discuss IT issues with people at an institution before accepting any job offer, preferably with current students and postdocs, since they will be more likely to be honest about the challenges.

Cons:

  • Lack of scalability: Once a researcher needs to run more jobs than there are cores on the machine, they could end up waiting a very long time for those jobs to complete, and/or crashing the machine by exhausting its resources.  These systems also generally have limited disk space.
  • Underuse: One can of course buy workstations with lots of cores/RAM/storage, which can help address the previous point to some degree.  However, then one is paying a lot of money for resources that will sit underutilized most of the time.
  • Admin issues: Individual researchers are responsible for managing their own systems. This means that each researcher in the lab will likely be using different versions of each software package, unless some kind of standardized container system is implemented.  This also means that each researcher needs to spend their precious time dealing with software installation issues, etc, unless there is a dedicated system admin, which costs $$$$.
  • Risk: The systems used for these kinds of operations are generally commodity-level systems, which are more likely to fail compared to enterprise-level systems (discussed below).  Unless the lab has a strict policy for backup or duplication (e.g. on Dropbox or Box) then it’s almost certain that at some point data will be lost.  There is also a non-zero risk of personal computers being stolen or lost.
Verdict: I don’t think this is generally a good model for any serious lab.  The only strong reason that I could see for having local workstations for data analysis is if one’s analysis requires a substantial amount of graphics-intensive manual interaction.

Virtual machines in the cloud

Under this model, researchers in the lab house their data on a commercial cloud service, and spin up virtual machines on that service as needed for data analysis purposes.

Pros:

  • Flexibility: This model allows the researcher to allocate just enough resources for the job at hand.  For the smallest jobs, one can sometimes get by with the free resources available from these providers (I will use Amazon Web Services [AWS] as an example here since it’s the one I’m most familiar with).  On AWS, one can obtain a free t2.micro instance (with 1 GB RAM and 1 virtual CPU); this will not be enough to do any real analysis, but could be sufficient for many other functions such as working with files.  At the other end, one can also allocate a c5.24xlarge instance with 96 virtual CPUs and 192 GiB of RAM for about $4/hour.  This range of resources should encompass the needs of many labs.  Similarly, on the storage side, you can scale capacity in an effectively unlimited way.  (See the sketch after this list for what spinning up and shutting down an instance looks like from the command line.)
  • Resource-efficiency: You only pay for what you use.
  • Energy-efficiency: Cloud services are thought to be much more energy-efficient than on-premise computers, due to their higher degree of utilization (i.e. they are not sitting idle most of the time) and the fact that they often obtain their power from renewable sources.  AWS estimates that cloud computing can reduce carbon emissions by up to 88% compared to on-premise computers.
  • Resilience: Occasionally the hardware on a cloud VM goes out.  When this happens, you simply spin up a new one --- no hardware replacement cost.
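
To make the flexibility point concrete, here is a minimal sketch of what allocating and releasing an instance looks like with the AWS command-line interface; the AMI ID, key pair name, and instance ID are placeholders that you would replace with your own values.

# launch a single free-tier t2.micro instance (placeholder AMI and key pair)
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro --count 1 --key-name my-lab-key

# check its state and public address
aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,State.Name,PublicDnsName]'

# when the analysis is done, terminate the instance so it stops costing money
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0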

Cons:

  • Administration and training: Since most scientists will not have experience spinning up and administering cloud systems, there will be some necessary training to make this work well; preferably, one would have access to a system administrator with cloud experience.  Researchers need to be taught, for example, to shut down expensive instances after using them, lest the costs begin to skyrocket.
  • Costs: Whereas the cost of a physical computer is one-time, cloud computing has ongoing costs.  If one is going to be a serious user of cloud computing, then they will need to deeply understand the cost structure of their cloud services.  For example, there are often substantial costs to upload and download data from the cloud, in addition to the costs of the resources themselves.  Cloud users should also implement billing alarms, particularly to catch any cases where credentials are compromised (a sketch of setting up such an alarm appears after this list). In one instance in my lab, criminals obtained our credentials (which were accidentally checked into GitHub) and spent more than $20,000 within about a day; this was subsequently refunded by AWS, but it caused substantial anxiety and extra work.
  • Scalability: There will be many cases in which an analysis cannot be feasibly run on a single cloud instance in reasonable time (e.g., running fMRIprep on a large dataset).  One can scale beyond single instances, but this requires a substantial amount of work, and is really only feasible if one has a serious cloud engineer involved. It is simply not a good use of a scientist’s time to figure out how to spin up and manage a larger cluster on a cloud service; I know this because I’ve done it, and those are many hours that I will never get back that could have been used to do something more productive (like play guitar, do yoga, or go for a nice long walk).  One could of course spin up many individual instances and manually run jobs across them, but this requires a lot of human effort, and there are better solutions available, as I outline below.
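
Here is a sketch of setting up a basic billing alarm with the AWS CLI; it assumes you have already enabled billing alerts on the account and created an SNS topic for notifications, and the threshold, account number, and topic name are placeholders.

# alert when estimated monthly charges exceed $200 (billing metrics live in us-east-1)
aws cloudwatch put-metric-alarm --region us-east-1 \
    --alarm-name lab-billing-alarm \
    --namespace AWS/Billing --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum --period 21600 --evaluation-periods 1 \
    --threshold 200 --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:lab-billing-alerts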

Verdict: For a relatively small lab with limited analysis needs and reasonably strong system administration skills or support, I think this is a good solution.   Be very careful with your credentials!

Server under a desk (SUAD)

Another approach for many labs is a single powerful on-premise server shared by multiple researchers in the lab, usually located in some out-of-the-way location so that no one (hopefully) spills coffee on it or walks away with it.  It will often have a commodity-grade disk array attached to it for storage.

Pros:
  • Flexibility: As with the on-premise PC model, the administrator has full control.
Cons:
  • Basically all the same cons as the on-premise PC model, plus the fact that it's a single point of failure for the entire lab.
  • Same scaling issues as cloud VMs
  • Administration: I know that there are numerous labs where either faculty or graduate students are responsible for server administration.  This is a terrible idea!  Mostly because it's time they could better spend reading, writing, exercising, or simply having a fun conversation over coffee.
Verdict: Don't do it unless you or your grad students really enjoy spending your time diagnosing file system errors and tuning firewall rules.

Cluster in a closet (CIIC)

This is a common model for researchers who have outgrown the single-computer-per-researcher or SUAD model.  It’s the model that we followed when I was a faculty member at UCLA, and that I initially planned to follow when I moved from UCLA to UT Austin in 2009.  The CIIC model generally involves a rack-mounted system with some number of compute nodes and a disk array for storage.  Usually shoved in a closet that is really too small to accommodate it.

Pros:

  • Scalability: CIIC generally allows for much better scalability. With current systems, one can pack more than 1000 compute cores alongside substantial storage within a single full-height rack.  Another big difference that allows much greater scalability is the use of a scheduling (or queueing) system, which allows jobs to be submitted and then run as resources become available.  Thus, one can submit many more jobs than the cluster can handle at any one time, and the scheduler will deal with this gracefully (a minimal job script is sketched after this list). It also prevents a problem that happens often under the SUAD model, when multiple users log in, start jobs on the server, and overrun its resources.
  • Flexibility: One can configure one’s cluster however they want, because they will have administrative control over the system.
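
For readers who have not used a scheduler before, here is a minimal sketch of a SLURM batch script for a single fMRIPrep run; the partition name, resource requests, container path, and data paths are all placeholders, and your cluster's documentation is the authority on the real values.

#!/bin/bash
#SBATCH --job-name=fmriprep-sub01
#SBATCH --partition=normal        # placeholder partition name
#SBATCH --nodes=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=12:00:00
#SBATCH --output=logs/%x-%j.out

# run fMRIPrep from a Singularity container (image and paths are placeholders)
singularity run /work/containers/fmriprep.simg \
    /work/bids /work/derivatives participant --participant-label 01

You submit it with sbatch and check on it with squeue -u $USER; if you submit more jobs than there are free nodes, the scheduler simply holds them in the queue until resources open up.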

Cons:

  • Administration: Administering a cluster well is a complex job that needs a professional system administrator, not a scientist moonlighting as a sysadmin; again, I know this because I lived it.  In particular, as a cluster gets bigger, the temptation for criminals to compromise it grows as well, and only a professional sysadmin is going to be able to keep up with cybercriminals who break into systems for a living.
  • Infrastructure: Even a reasonably sized cluster requires substantial infrastructure that is unlikely to be met by a random closet in the lab.  The first is power: A substantial cluster will likely need a dedicated power line to supply it.  The second is cooling: Computers generate lots of heat, to a degree that most regular rooms will not be able to handle.  On more than one occasion we had to shut down the cluster at UCLA because of overheating, and this can also impact the life of the computer’s components.  The third is fire suppression: If a fire starts in the closet, you don’t want regular sprinklers dumping a bunch of water on your precious cluster. It is for all of these reasons that many campuses are no longer allowing clusters in campus buildings, instead moving them to custom-built data centers that can address all of these needs.
  • Cost: The cost of purchasing and running a cluster can be high. Commercial-level hardware is expensive, and when things break you have to find money to replace them, because your team and colleagues will have come to rely on them.
  • Training: Once you move to a cluster with more than a single node, you will need to use a scheduler to submit and run jobs. This requires a change in mindset about how to do computing, and some researchers find it annoying at first.  It definitely requires letting go of a certain level of control, which is aversive for many people. 
  • Interactivity: It can be more challenging to do interactive work on a remote cluster than on a local workstation, particularly if it is highly graphics-intensive work.  One usually interacts with these systems using a remote window system (like VNC), and these often don’t perform very well.
Verdict: Unless you have the resources and a good sysadmin, I’d shy away from running your own cluster.  If you are going to do so, locate it in a campus data center rather than in a closet.

High-performance computing centers

When I moved from UCLA to UT Austin in 2009, I had initially planned to set up my own CIIC. However, once I arrived I realized that I had another alternative, which was to instead take advantage of the resources at the Texas Advanced Computing Center (TACC), the local high-performance computing (HPC) center (which also happens to be world-class).  My lab did all of its fMRI analyses using the TACC systems, and I have never looked back. Since moving to Stanford, we also take advantage of the cluster at the Stanford Research Computing Facility, while continuing to use the TACC resources.

Pros:

  • Scalability: Depending on the resources available at one’s HPC center, one can often scale well beyond the resources of any individual lab.  For example, on the Frontera cluster at TACC (its newest, currently the 5th most powerful supercomputer on Earth), a user can request up to 512 nodes (28,672 cores) for up to 48 hrs.  That's a lot of Freesurfer runs (a sketch of a batch submission along those lines appears after this list). The use of scheduling systems also makes the management of large jobs much easier.  These centers also usually make large-scale storage available for a reasonable cost.
  • Professional management: HPC centers employ professional system administrators whose expertise lies in making these systems work well and fixing them when they break.  And the best part is that you generally don’t have to pay their salary! (At least not directly).
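
To illustrate what "a lot of Freesurfer runs" can look like in practice, here is a sketch of a SLURM job array that fans recon-all out across a list of subjects. The queue name, time limit, array size, and paths are placeholders, and some centers (TACC included) prefer their own many-task tools such as launcher over job arrays, so check the local documentation first.

#!/bin/bash
#SBATCH -J recon-all-array
#SBATCH -p normal                 # placeholder queue name
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -t 24:00:00
#SBATCH --array=1-200             # one task per line of subject_list.txt

# pick the subject ID for this array task from a plain-text list
subj=$(sed -n "${SLURM_ARRAY_TASK_ID}p" subject_list.txt)

# run the full FreeSurfer reconstruction for that subject (input path is a placeholder)
recon-all -s "${subj}" -i "/work/raw/${subj}/T1w.nii.gz" -all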

Cons:

  • Training: The efficient usage of HPC resources requires that researchers learn a new model for computing, and a new set of tools required for job submission and management. For individuals with solid UNIX skills this is rarely a problem, but for researchers without those skills it can be a substantial lift.
  • Control: Individual users will not have administrative control (“root”) on HPC systems, which limits the kinds of changes one can make to the system. Conversely, the administrators may decide to make changes that impact one’s research (e.g. software upgrades).  
  • Sharing: Using HPC systems requires good citizenship, since the system is being shared by many users.  Most importantly: Users must *never* run jobs on the login node, as tempting as that might sometimes be (see the sketch after this list for how to request an interactive compute node instead).
  • Waiting: Sometimes the queues will become filled up and one may have to wait a day for one's jobs to run (especially just before the annual Supercomputing conference).  
  • Access:  If one’s institution has an HPC center, then one may have access to those resources.  However, not all such centers are built alike.  I’ve been lucky to work with centers at Texas and Stanford that really want researchers to succeed.  But I have heard horror stories at other institutions, particularly regarding HPC administrators who see users as an annoyance rather than as customers, or who have a very inflexible approach to system usage that doesn’t accommodate user needs.  For researchers without local HPC access, there may be national resources that one can gain access to, such as the XSEDE network in the US.
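
On the login-node point: when you do need to work interactively, the right move is to request an interactive session on a compute node. A generic SLURM sketch is below; the partition name and time limit are placeholders, and TACC wraps roughly the same functionality in its idev utility.

# request a one-node interactive shell on a compute node rather than working on the login node
srun --pty --nodes=1 --ntasks=1 --time=02:00:00 --partition=normal /bin/bash

# at TACC, the idev utility provides a similar interactive session (see idev -help for options)
idev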

Verdict:  For a lab like mine with significant computing needs, I think that HPC is the only way to go, assuming that one has access to a good HPC center.  Once you live through the growing pains, it will free you up to do much larger things and stop worrying about your cluster overheating because an intruder is using it to mine Bitcoin.

These are of course just my opinions, and I'm sure others will disagree.  Please leave your thoughts in the comment section below!


Thursday, June 27, 2019

Why I will be flying less

Since reading David Wallace-Wells’ “The Uninhabitable Earth: Life After Warming” earlier this year, followed by some deep discussions on the issue of climate change with my friend and colleague Adam Aron from UCSD, I no longer feel we can just sit back and hope someone else will fix the problem.  And it’s becoming increasingly clear that if we as individuals want to do something about climate change, changing our travel habits is probably the single most effective action we can take.  Jack Miles made this case in his recent Washington Post article, “For the love of Earth, stop traveling”:

According to former U.N. climate chief Christiana Figueres, we have only three years left in which to “bend the emissions curve downward” and forestall a terrifying cascade of climate-related catastrophes, much worse than what we’re already experiencing. Realistically, is there anything that you or I can do as individuals to make a significant difference in the short time remaining?
The answer is yes, and the good news is it won’t cost us a penny. It will actually save us money, and we won’t have to leave home to do it. Staying home, in fact, is the essence of making a big difference in a big hurry. That’s because nothing that we do pumps carbon dioxide into the atmosphere faster than air travel. Cancel a couple long flights, and you can halve your carbon footprint. Schedule a couple, and you can double or triple it.

I travel a lot - I have almost 1.3 million lifetime miles on United Airlines, and in the last few years have regularly flown over 100,000 miles per year.  This travel has definitely helped advance my scientific career, and has been in many ways deeply fulfilling and enlightening.  However, the toll has been weighing on me and Miles' article really pushed me over the edge towards action.  I used the Myclimate.org carbon footprint calculator to compute the environmental impact of my flights just for the first half of 2019, and it was mind-boggling: more than 23 tons of CO2.  For comparison, my entire household’s yearly carbon footprint (estimated using https://www3.epa.gov/carbon-footprint-calculator/) is just over 10 tons!  

For these reasons, I am committing to eliminate (to the greatest degree possible) academic air travel for the foreseeable future. That means no air travel for talks, conferences, or meetings -- instead participating by telepresence whenever possible.  I am in a fortunate position, as a tenured faculty member who is already well connected within my field.  By taking this action, I hope to help offset the travel impact of early career researchers and researchers from the developing world for whom air travel remains essential in order to get their research known and meet fellow researchers in their field. I wish that there was a better way to help early career researchers network without air travel, but I just haven’t seen anything that works well without in-person contact.  Hopefully the growing concern about conference travel will also help spur the development of more effective tools for virtual meetings. 


Other senior researchers who agree should join me in taking the No Fly pledge at https://noflyclimatesci.org/.  You can also learn more here: https://academicflyingblog.wordpress.com/


Monday, December 3, 2018

Productivity stack for 2019


Apparently some people seem to think my level of productivity is simply not humanly possible: 

For the record, there is no cloning lab in my basement.  I attribute my productivity largely to a combination of mild obsessive/compulsive tendencies and a solid set of tools that help me keep from feeling overwhelmed when the to-do list gets too long.  I can’t tell you how to become optimally obsessive, but I can tell you about my productivity stack, which I hope will be helpful for some of you who are feeling increasingly overwhelmed as you gain responsibilities. 

Platform: MacBook Pro 13” + Mac OS X 
  • I have flirted with leaving the Mac as the OS has gotten increasingly annoying and the hardware increasingly crappy, but my month-long trial period with a Windows machine left me running back to the Mac (mostly because the trackpad behavior on the Dell XPS13 was so bad).  Despite the terrible keyboard (I’ve had two of them replaced so far) and the lack of a physical escape key, the 13” Macbook Pro is a very good machine for working on the road - it’s really light and the battery life is good enough that I rarely have to plug in, even on a long flight from SFO to the east coast.  In the old days I would invert my screen colors to reduce power usage, but now I just use Dark Mode in the latest Mac OSX.
  • I keep a hot spare laptop (a previous-generation Macbook Pro) synced to all of my file sharing platforms (Dropbox, Box, and Google Drive) in case my primary machine were to die suddenly.  Which has happened. And will happen again. If you can afford to have a second laptop I would strongly suggest keeping a hot spare in the wings.
  • I don’t have a separate desktop system in my office - when I’m there I just plug into a larger monitor and work from my laptop. In the past I had separate desktop and laptop systems but just found it too easy for things to get desynchronized.
  • Pro Tip: About once a month I run the Onyx maintenance scripts, run the DiskUtility file system repair, and clone my entire system to a lightweight 1TB external drive (encrypted, of course) using CarbonCopyCloner.  Having a full disk backup in my backpack has saved me on a few occasions when things went wrong while traveling.

Mobile: Pixel 2 + Google Fi 
  • I left the iPhone more than a year ago now and have not looked back.  The Pixel 2 is great and Google Fi wireless service is awesome, particularly if you travel a lot internationally, since data costs the same almost everywhere on earth.  If you want to sign up, use my referral link and you’ll get a $20 credit (full disclosure - I will get a $100 credit).

Email: Gmail  
  • For many years I used the Mac Mail.app client, but as it became increasingly crappy I finally gave up and moved to the Gmail web client, which has been great.  The segregation of promotional and social emails, and new features like nudges, make it a really awesome mail client.  
  • My email workflow is a lazy adaptation of the GTD system: I try not to leave anything in my inbox for more than a day or so (unless I’m traveling).  I either act on it immediately, decide to ignore/delete it, or put it straight into my todo list (and archive the message so it’s no longer in my inbox).  I’m rarely at inbox zero, but I usually manage to keep it at 25 or fewer messages, so I can see it all in a single screen.

To do list: Todoist 
  • I moved to Todoist a couple of years ago and have been very happy with it. It’s as simple as it needs to be, and no simpler.  The integration with GMail is particularly nice.
Calendar: Google Calendar
  • The integration between my Android device and Gmail across platforms makes this a no-brainer.


Notes: Evernote 
  • Evernote is my go-to for note-taking during meetings, talks, and whenever I just want to write something down.  

Lab messaging: Slack 
  • I really don’t love Slack (because I feel that it’s too easy for things to get lost when a channel is busy), but it has become our lab’s main platform for messaging.   We've tried alternatives but they have never really stuck.

Safe Surfing: Private Internet Access VPN + UBlock Origin/Privacy Badger 
  • Whenever I’m on a public network I stay safe by using the Private Internet Access VPN, which works really well on every platform I have tested it on.   (and you can pay for it with Bitcoin!)
  • When surfing in Chrome I use UBlock Origin and Privacy Badger extensions to prevent trackers. 

Writing: Google Docs/TexShop 
  • For collaborative writing we generally stick to Google Docs, which just works.  Paperpile is a very effective reference management system.  
  • For my own longer projects (like books) I write in LaTeX using TexShop, with BibDesk for bibliography management, via the MacTex distribution.  If I were writing a dissertation today I would definitely use LaTeX, as I have seen too many students scramble as Microsoft Word screwed up their huge dissertation file.  Some folks in the lab use Overleaf, which I like, but I also do a lot of writing while offline so a web-based solution isn’t optimal for me.

Presentations: Keynote 
  • I have tried at various points to leave Keynote, but always came crawling back.  It’s just so easy to create great-looking presentations, and as cool as it would be to build them in LaTeX, I would have nightmares involving the inability to compile my presentation 3 minutes before my talk.

Art: Affinity Designer 
  • I gave up on Adobe products when they moved to a subscription model.  For vector art, I really like Affinity Designer, though it does have a pretty substantial learning curve.  I've tried various freeware alternatives but none of them work very well.

Coding in R: Rstudio 
  • If you’ve read my statistics book you know that I have a love/hate relationship with R, and most of the love comes from RStudio, which is an excellent IDE.  Except for code meant to run on the supercomputer, I write nearly all of my R code in RMarkdown Notebooks, which are the best solution for literate programming that I have seen.

Coding in Python: Anaconda + Jupyter Lab/Atom 
  • Python is my language of choice for most coding problems, and Anaconda has pretty much everything I need for scientific Python.
  • For interactive coding (e.g. for teaching or exploration) I use Jupyter Lab, which has matured nicely.  
  • For non-interactive coding (e.g. for code that will run on the supercomputer) I generally use Atom, which is nice and simple and gets the job done.

Hopefully these tips are helpful - now back to getting some real work done! 

Tuesday, November 27, 2018

Automated web site generation using Bookdown, CircleCI, and Github

For my new open statistics book (Statistical Thinking for the 21st Century), I used Bookdown, which is a great tool for writing a book using RMarkdown.  However, as the book came together, the time to build the book grew to more than 10 minutes due to the many simulations and Bayesian model estimation.  And since each output type (of which there are currently three: Gitbook, PDF, and EPUB) requires a separate build run, rebuilding the full book distribution became quite an undertaking.  For this reason, I decided to implement an automated solution using the CircleCI continuous integration service. We already use this service for many of the software development projects in our lab (such as fMRIPrep and MRIQC), so it was a natural choice for this project as well.

The use of CircleCI for this project is made particularly easy by the fact that both the book source and the web site for the book are hosted on Github — the ability to set up hooks between Github and CircleCI allows two important features. First, it allows us to automatically trigger a rebuild of the site whenever there is a new push to the source repo.  Second, it allows CircleCI to push a new copy of the book files to the separate repo that the site is served from.

Here are the steps to setting this up; see the Makefile and CircleCI config.yml file in the repo for the full details.  And if you come across anything that I missed, please leave a comment below!

  1. Create a CircleCI account linked to the relevant GitHub account.
  2. Add the source repo to CircleCI.
  3. Create the CircleCI config.yml file.  Here is the content of my config file, with comments added to explain each step:

version: 2
jobs:
  build:
    docker:
# this is my custom Docker image
      - image: poldrack/statsthinking21

CircleCI spins up a VM specified by a Docker image, to which we can then add any necessary additional software pieces.  I initially started with an image with R and the tidyverse preinstalled (https://hub.docker.com/r/rocker/tidyverse/) but installing all of the R packages as well as the TeX distribution needed to compile the PDF took a very long time, quickly using up the 1,000 build minutes per month that come with the CircleCI free plan.  In order to save this time I built a custom Docker image (see the Dockerfile) that incorporates all of the dependencies needed to build the book; this way, CircleCI can simply pull the image from my DockerHub repo and run it straight away rather than having to build a bunch of R packages (a sketch of building and pushing the image appears below).
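
For anyone replicating this setup, building and publishing that custom image is roughly a two-command affair. This is a sketch that assumes the Dockerfile sits in the current directory and that you are logged in to Docker Hub.

# build the image with the R/TeX dependencies baked in, then push it to Docker Hub
# so that CircleCI can pull it prebuilt rather than reinstalling packages on every run
docker build -t poldrack/statsthinking21 .
docker push poldrack/statsthinking21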

    steps:
      - add_ssh_keys:
          fingerprints:
            - "73:90:5e:75:b6:2c:3c:a3:46:51:4a:09:ac:d9:84:0f”

In order to be able to push to a github repo, CircleCI needs a way to authenticate itself.  A relatively easy way to do this is to generate an SSH key and install the public key portion as a “deploy key” on the Github repo, then install the private key as an SSH key on CircleCI.  I had problems with this until I realized that it requires a very specific type of SSH key (a PEM key using RSA encryption), which I generated on my Mac using the following command:

ssh-keygen -m PEM -t rsa -C "poldrack@gmail.com"


# check out the repo to the VM - it also becomes the working directory
      - checkout
# I forgot to install ssh in the docker image, so install it here as we will need it for the github push below
      - run: apt-get install -y ssh
# now run all of the rendering commands
      - run:
           name: rendering pdf
           command: |
             make render-pdf
      - run:
           name: rendering epub
           command: |
             make render-epub
      - run:
           name: rendering gitbook
           command: |
             make render-gitbook

The Makefile in the source repo contains the commands to render the book in each of the three formats that we distribute: Gitbook, PDF, and EPUB.  Here we build each of those (a sketch of roughly what those targets run is below).
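
For reference, those make targets presumably boil down to bookdown render calls along these lines; this is only a sketch, the Makefile in the repo is the real source of truth, and the index file name is an assumption.

# approximate equivalents of make render-pdf / render-epub / render-gitbook
Rscript -e 'bookdown::render_book("index.Rmd", "bookdown::pdf_book")'
Rscript -e 'bookdown::render_book("index.Rmd", "bookdown::epub_book")'
Rscript -e 'bookdown::render_book("index.Rmd", "bookdown::gitbook")'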

# push the rendered site files to its repo on github
      - run:
           name: check out site repo
           command: |
             cd /tmp
             ssh-keyscan github.com >> ~/.ssh/known_hosts

The ssh-keyscan command is necessary in order to allow headless operation of the ssh command necessary to access github below.  Otherwise the git clone command will sit and wait at the host authentication prompt for a keypress that will never come.

# clone the site repo into a separate directory
             git clone git@github.com:psych10/thinkstats.git
             cd thinkstats
# copy all of the site files into the site repo directory
             cp -r ~/project/_book/* .
             git add .
# necessary config to push
             git config --global user.email poldrack@gmail.com
             git config --global user.name "Russ Poldrack"
             git commit -m"automated update"
             git push origin master

That’s it! CircleCI should now build and deploy the book any time there is a new push to the repo.  Don’t forget to add a CircleCI badge to the README to show off your work!   

Tuesday, November 20, 2018

Statistical Thinking for the 21st Century - a new intro statistics book

I have published an online draft of my new introductory statistics book, titled "Statistical Thinking for the 21st Century", at http://thinkstats.org.  This book was written for my undergraduate statistics course at Stanford, which I started teaching last year.  The first time around I used Andy Field's An Adventure in Statistics, which I really like but most of my students disliked because the statistical content was buried within a lot of story.  In addition, there are a number of topics (overfitting, cross-validation, reproducibility) that I wanted to cover in the course but weren't covered deeply in the book.  So I decided to write my own, basically transcribing my lecture slides into a set of RMarkdown notebooks and generating a book using Bookdown.

There are certainly many claims in the book that are debatable, and almost certainly things that I have gotten wrong as well, given that I Am Not A Statistician.  If you have the time and energy, I'd love to hear your thoughts/suggestions/corrections - either by emailing me, or by posting issues at the github repo. 

I am currently looking into options for publishing this book in low-cost paper form - if you would be interested in using such a book for a course you teach, please let me know.  Either way, the electronic version will remain freely available online.


Tuesday, April 17, 2018

How can one do reproducible science with limited resources?

When I visit other universities to talk, we often end up having free-form discussions about reproducibility at some point during the visit.  During a recent such discussion, one of the students raised a question that regularly comes up in various guises. Imagine you are a graduate student who desperately wants to do fMRI research, but your mentor doesn’t have a large grant to support your study.  You cobble together funds to collect a dataset of 20 subjects performing your new cognitive task, and you wish to identify the whole-brain activity pattern associated with the task. Then you happen to read "Scanning the Horizon” which points out that a study with only 20 subjects is not even sufficiently powered to find the activation expected from a coarse comparison of motor activity to rest, much less to find the subtle signature of a complex cognitive process.  What are you to do?
In these discussions, I often make a point that is statistically correct but personally painful to our hypothetical student:  The likelihood of such a study identifying a true positive result if it exists is very low, and the likelihood of any positive results being false is high (as outlined by Button et al, 2013), even if the study was fully pre-registered and there is no p-hacking.  In the language of clinical trials, this study is futile, in the sense that it is highly unlikely to achieve its aims. In fact, such a study is arguably unethical, since the (however miniscule) risks of participating in the study are not offset by any potential benefit to the subject or to society.  This raises a dilemma: How are students with limited access to research funding supposed to gain experience in an expensive area of research and test their ideas against nature?
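
To make the power argument concrete, here is a quick back-of-the-envelope check using the pwr package in R from the command line. This is only a sketch: the assumed effect size of d = 0.5 is purely illustrative, and whole-brain multiple-comparison correction makes the real situation far worse than an uncorrected calculation suggests.

# power of a one-sample test with n = 20 at an uncorrected alpha of .05, assuming d = 0.5
Rscript -e 'library(pwr); pwr.t.test(n = 20, d = 0.5, sig.level = 0.05, type = "one.sample")'

# the same effect at a (still lenient) voxelwise threshold of .001
Rscript -e 'library(pwr); pwr.t.test(n = 20, d = 0.5, sig.level = 0.001, type = "one.sample")'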

I have struggled with how to answer these questions over the last few years.  I certainly wouldn't want to suggest that only students from well-funded labs or institutions should be able to do the science that they want to do.  But at the same time, giving students a pass on futile studies will have a dangerous influence, since many of those studies will be submitted for publication and will thus increase the number of false reports (positive or negative) in the literature.  As Tal Yarkoni said in his outstanding “Big Correlations in Little Studies” paper:
Consistently running studies that are closer to 0% power than to 80% power is a sure way to ensure a perpetual state of mixed findings and replication failures.
Thus, I don’t think that the answer is to say that it’s OK to run underpowered studies.  In thinking about this issue, I’ve come up with a few possible ways to address the challenge.

1) "if you can’t answer the question you love, love the question you can"

In an outstanding reflection published last year in the Journal of Neuroscience, Nancy Kanwisher said the following in the context of her early work on face perception:
I had never worked on face perception because I considered it to be a special case, less important than the general case of object perception. But I needed to stop messing around and discover something, so I cultivated an interest in faces. To paraphrase Stephen Stills, if you can’t answer the question you love, love the question you can.
In the case of fMRI, one way to find a question that you can answer is to look at shared datasets.  There is now a huge variety of shared data available from resources including OpenfMRI/OpenNeuro, FCP/INDI, ADNI, the Human Connectome Project, and OASIS, just to name a few. If  a relevant dataset is not available openly but you know of a paper where someone has reported such a dataset, you can also contact those authors and ask whether they would be willing to share their data (often with an agreement of coauthorship). An example of this from our lab is a recent paper by Mac Shine (published in Network Neuroscience), in which he contacted the authors of two separate papers with relevant datasets and asked them to share the data. Both agreed, and the results came together into a nice package.  These were pharmacological fMRI studies that would not have even been possible within my lab, so the sharing of data really did open up a new horizon for us.

Another alternative is to do a meta-analysis, either based on data available from sites like Neurosynth or Neurovault, or by requesting data directly from researchers.  As an example, a student in one of my graduate classes did a final project in which he requested the data underlying meta-analyses published by two other groups, and then combined these to perform a composite meta-analysis, which was ultimately published.  

2) Focus on cognitive psychology and/or computational models for now

One of my laments regarding the training of cognitive neuroscientists in today’s climate is that their training is generally tilted much more strongly towards the neuroscience side (and particularly focused on neuroimaging methods), at the expense of training in good old fashioned cognitive psychology.  As should be clear from many of my writings, I think that a solid training in cognitive psychology is essential in order to do good cognitive neuroscience; certainly just as important as knowing how to properly analyze fMRI data. Increasingly, this means thinking about computational models for cognitive processes.  Spending your graduate years focusing on designing cognitive studies and building computational models of them will put you in an outstanding position to get a good postdoc in a neuroimaging lab that has the resources to support the kind of larger neuroimaging studies that are now required for reproducibility. I’ve had a couple of people from pure cognitive psychology backgrounds enter my lab as postdocs, and their NIH fellowship applications were both funded on the first try, because the case for additional training in neuroscience was so clear.  Once you become skilled at cognition and (especially) computation, imaging researchers will be chomping at the bit to work with you (I know I would!). In the meantime you can also start to develop chops at neuroimaging analysis using shared data as outlined in #1 above.

3) Team up

The field of genetics went through a similar reckoning with underpowered studies more than a decade ago, and the standard in that field is now for large genome-wide association studies which often include tens of thousands of subjects.  They also usually include tens of authors on each paper, because amassing such large samples requires more resources than any one lab can possess. This strategy has started to appear in neuroimaging through the ENIGMA consortium, which has brought together data from many different imaging labs to do imaging genetics analyses.  If there are other labs working on similar problems, see if you can team up with them to run a larger study; you will likely have to make compromises, but a reproducible study is worth it (cf. #1 above).

4) Think like a visual neuroscientist

This one won’t work for every question, but in some cases it’s possible to focus your investigation on a much smaller number of individuals who are characterized much more thoroughly; instead of collecting an hour of data each on 20 people, collect 4 hours of data per person on 5 people.  This is the standard approach in visual neuroscience, where studies will often have just a few subjects who have been studied in great detail, sometimes with many hours of scanning per individual (e.g. see any of the recent papers from Jack Gallant’s lab for examples of this strategy). Under this strategy you don’t use standard group statistics, but instead present the detailed results from each individual; if they are consistent enough across the individuals then this might be enough to convince reviewers, though the farther you get from basic sensory/motor systems (where the variance between individuals is expected to be relatively low) the harder it will be to convince them.  It is essential to keep in mind that this kind of analysis does not allow one to generalize beyond the sample of individuals who were included in the study, so any resulting papers will be necessarily limited in the conclusions they can draw.

5) Carpe noctem

At some imaging centers, the scanning rates become drastically lower during off hours, such that the funds that would buy 20 hours of scanning during prime time might stretch to buy 50 or more hours late at night.  A well known case is the Midnight Scan Club at Washington University, which famously used cheap late night scan time to characterize the brains of ten individuals in detail. Of course, scanning in the middle of the night raises all sorts of potential issues about sleepiness in the scanner (as well in the control room), so it shouldn’t be undertaken without thoroughly thinking through how to address those issues, but it has been a way that some labs have been able to stretch thin resources much further.  I don’t want this to be taken as a suggestion that students be forced to work both day and night; scanning into the wee hours should never be forced upon a student who doesn’t want to do it, and the rest of their work schedule should be reorganized so that they are not literally working day and night.

I hope these ideas are useful - If you have other ideas, please leave them in the comments section below!

(PS: Thanks to Pat Bissett and Chris Gorgolewski for helpful comments on a draft of this piece!)



Sunday, March 25, 2018

To Code or Not to Code (in intro statistics)?

Last week we wrapped Stats 60/Psych 10, which was the first time I have ever taught such a course.  One of the goals of the course was for the students to develop enough data analysis skill in R to be able to go off and do their own analyses, and it seems that we were fairly successful in this.  To quantify our performance I used data from an entrance survey (which asked about previous programming experience) and an exit survey (which asked about self-rated R skill on a 1-7 scale).  Here are the data from the exit survey, separated by whether the students had any previous programming experience:
This shows us that there are now about fifty Stanford undergrads who had never programmed before and who now feel that they have at least moderate R ability (3 or above).  Some comments on the survey question "What were your favorite aspects of the course?" also reflected this (these are all from people who had never programmed before):

  • The emphasis on learning R was valuable because I feel that I've gained an important skill that will be useful for the rest of my college career.
  • I feel like I learned a valuable skill on how to use R
  • Gradually learning and understanding coding syntax in R
  • Finally getting code right in R is a very rewarding feeling
  • Sense of accomplishment I got from understanding the R material on my own
At the same time, there was a substantial contingent of the class that did not like the coding component.  This was evident in some comments on the survey question "What were your least favorite aspects of the course?":
  • R coding. It is super difficult to learn as a person with very little coding background, and made this class feel like it was mostly about figuring out code rather than about absorbing and learning to apply statistics.
  • My feelings are torn on R. I understand that it's a useful skill & plan to continue learning it after the course (yay DataCamp), but I also found it extremely frustrating & wouldn't have sought it out to learn on my own.
  • I had never coded before, nor have I ever taken a statistics course. For me, trying to learn these concepts together was difficult. I felt like I went into office hours for help on coding, rather than statistical concepts.
One of the major challenges of the quarter system is that we only have 10 weeks to cover a substantial amount of material, which has left me asking myself whether it is worth it to teach students to analyze data in R, or whether I should instead use one of the newer open-source graphical statistics packages, such as JASP or Jamovi.  The main pro that I see of moving to a graphical package is that the students could spend more time focusing on statistical concepts, and less time trying to understand R programming constructs like pipes and ggplot aesthetics that have little to do with statistics per se.   On the other hand, there are several reasons that I decided to teach the course using R in the first place:
  • Many of the students in the class come from humanities departments where they would likely never have a chance to learn coding.  I consider computational literacy (including coding) to be essential for any student today (regardless of whether they are from sciences or the humanities), and this course provides those students with a chance to acquire at least a bit of skill and hopefully inspires curiosity to learn more.
  • Analyzing data by pointing and clicking is inherently non-reproducible, and one of the important aspects of the course was to focus the students on the importance of reproducible research practices (e.g. by having them submit RMarkdown notebooks for the problem sets and final project). 
  • A big part of working with real data is wrangling the data into a form where the statistics can actually be applied.  Without the ability to code, this becomes much more difficult.
  • The course focuses a lot on simulation and randomization, and I'm not sure that the interactive packages will be useful for instilling these concepts.
I'm interested to hear your thoughts about this tradeoff: Is it better for the students to walk away with some R skill but less conceptual statistical knowledge, or greater conceptual knowledge without the ability to implement it in code?  Please leave your thoughts in the comments below.