Are NSF SBIR Reviewers Unqualified?
This post contends that the National Science Foundation’s reviewer selection process doesn’t comply with NSF policy, which recommends, or with NSF solicitations, which require, that reviewers have expertise in the fields of the proposals they review.
Furthermore, this post contends that some NSF SBIR reviewers don’t know the rules of the SBIR program and that some reviewers don’t read proposals beyond the first paragraph.
What the Rules Say
NSF SBIR solicitations require that “All proposals are carefully reviewed by…three to ten…persons…who are experts in the particular fields represented by the proposal.”
See also Proposal & Award Policies & Procedures, Chapter III — NSF Proposal Processing and Review, B. Selection of Reviewers.
How the NSF Actually Reviews Proposals
The SBIR review process begins with a program director gathering approximately eight proposals submitted in a single subtopic, for example, Agricultural Technology. One proposal might be about soil, another about plants, another about weeds, another about fertilizer effluent washing into rivers, etc.
The program director uses key words from this collection of proposals to search a database of reviewers. The key words come from the Overview section of each Project Summary. Select your key words carefully: terms that are too broad will allow unqualified reviewers to review your proposal. The panel key words are not sent to Principal Investigators.
I examined this NSF reviewer database. The reviewers are almost entirely from academia. I saw a retired university president and an Amway salesman (not the same person).
The SBIR program is for small businesses to conduct scientific research that leads to commercial products. The program is not intended for university research. The SBIR program has been captured by universities, which are among our largest corporations. SBIR reviewers should be from industry. The ideal reviewers would be potential customers for your innovation. Your reviewers won’t be ideal. They likely won’t even be adequate.
A dozen or so reviewers are contacted and invited to join a panel. Typically, 25–30% of potential reviewers contacted agree to join the panel. Panels usually consist of three or four reviewers.
Reviewers are asked to rate their “comfort level” of expertise with each proposal, on a four-point scale. Program directors try to convene panels with many reviewers who are “comfortable” with many of the proposals. However, constraints sometimes make this impossible.
The “comfort levels” are not shown to the Principal Investigator.
Verbatim copies of reviews, excluding the names of the reviewers or any reviewer-identifying information, are sent to the Principal Investigator…
— Small Business Innovation Research (SBIR) Program Solicitation NSF 23–515, VI. NSF Proposal Processing and Review Procedures
“Verbatim” means “the exact words” or “word-for-word.” “Comfort levels” of expertise are part of the reviews, not “reviewer-identifying information.” You are entitled to see the reviewers’ “comfort levels,” and the panel key words, but the NSF won’t share this information with you.
The NSF has never investigated whether there’s a correlation between “comfort levels” and scores. If reviewers with little expertise in the fields of proposals give lower scores, and experts give higher scores, then the playing field isn’t level.
If a key word search fails to find qualified reviewers for a proposal, the program director falls back on the subtopic, grouping the proposal with the other proposals in that subtopic. Program directors rarely fall back on the applicant’s suggested reviewers instead. More on this below.
The panels meet for a day to discuss the proposals. Some panels meet remotely while others meet in-person. Remote panelists earn $200 per day. In-person, local panelists earn $280 per day. In-person panelists who must travel earn $480 per day plus travel expenses. This pay is recognized to be below competitive rates, i.e., reviewers are expected to work altruistically.
Panelists are expected to have general or “conversational” knowledge of the fields of the proposals or the subtopic. Deep expertise in any field is not required. A potential reviewer who has deep knowledge of one field of one proposal but lacks general knowledge of the fields of all the proposals will not be invited.
This panel design is used to encourage panelists to talk to each other. If one panelist has a question, another panelist may be able to answer the question. It is not unusual for reviewers’ minds to be changed during these discussions.
Each panelist writes an individual review of each proposal, and the panel also writes a group review.
There is no guarantee that anyone on a panel will have expertise in the particular fields of any given proposal. In other words, the NSF does not follow its own rules when choosing reviewers.
In addition to review panels, the NSF also uses “ad hoc” reviewers. These are individual reviewers who work alone and are selected for having deep knowledge of a proposal’s fields.
The NSF has no appeals process when an applicant suspects something is wrong with a proposal’s reviews.
Reviewer Expertise Is Private Information
NSF staff cite the Privacy Act and Henke v. Department of Commerce et al., 83 F.3d 1445, to justify not revealing reviewers’ expertise to applicants. The NSF argues that, while expertise isn’t discussed in Henke, by extension expertise cannot be revealed because expertise might identify a reviewer.
The Privacy Act, 5 U.S.C. § 552a(b), lists twelve conditions under which the federal government may disclose private information about individuals. The eleventh condition, 5 U.S.C. § 552a(b)(11), allows a judge to order a federal agency to reveal private information.
What’s Wrong With NSF’s Review Process
- The reviewers in the database are unqualified for the SBIR program. The reviewers are academics who lack commercial expertise. Some of the reviewers are retired. Some reviewers have no apparent scientific expertise, e.g., the Amway salesman.
- The solicitations don’t explain what the key words section is used for. Are the key words used to search the reviewer database?
- Project Descriptions are too long, at fifteen pages (about 6500 words, or twenty normally formatted pages). In contrast, the NIH Research Strategy is six pages (3300 words, or about ten normally formatted pages). The DoEd Project Narrative is ten pages (4750 words). The NSF Project Description has redundant questions, in an order that’s hard to follow. Reviewers can’t read eight Project Descriptions in eight hours and retain what they read.
- Reviewers are overworked and underpaid. Reviewers are expected to read eight proposals in one day. Each proposal is about fifty pages, including the essential Project Description (fifteen pages). Reviewers are paid $25–35 per proposal for decisions about $275,000 grants, i.e., about 0.01% (roughly 1/10,000) of the value of the decisions they’re making; a quick arithmetic check follows this list. Suppose there were a box you could check that would pay the reviewers $250, get two or three hours of their time, and reduce your grant to $274,000. Would you check the box?
- Falling back on the subtopic when no qualified reviewers are found for your proposal is non-compliant with the solicitations and NSF policy. Reviewers are required to be “experts in the particular fields represented by the proposal,” not to have “conversational knowledge.”
- Reviewers aren’t told to refuse to review proposals that they’re not qualified to review.
- “Comfort levels” and the panel key words are not provided to the Principal Investigator. I’ve repeatedly asked and have repeatedly been refused. NSF policy states that Principal Investigators are to receive the “verbatim” reviews, that is, every word of the reviews, except for information that could identify individual reviewers. (“Comfort levels” and panel key words can’t identify reviewers.)
- Suggested reviewers are rarely consulted.
- No links to videos or apps are allowed. Reviewers are expected to read a description of an app when ten minutes using the app would be more informative. A two-minute video can show what an innovation will do better than pages of descriptions.
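As a sanity check on the pay figure above, here is the arithmetic, assuming the midpoint of the $25–35 range (the $30 midpoint is my own illustrative assumption, not an NSF figure):

\[
\frac{\$30}{\$275{,}000} \approx 1.1 \times 10^{-4} \approx 0.01\% \approx \frac{1}{9{,}000}
\]

Even at the top of the range ($35 per proposal), a reviewer is paid barely more than a hundredth of one percent of the award being decided.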
“No Pilot Study Data”
One of the most frequent reasons my proposals have been rejected is “No pilot study data proving that the innovation is effective.”
The NSF doesn’t require pilot study data. Reviewers make up that rule.
Department of Education SBIR solicitations require that a prototype has been developed and pilot study data has been collected. The NSF has no such requirement.
“Incremental Modification of Established Products”
This is a real rule.
Project Activities Not Responsive to the Solicitation.
- Evolutionary development or incremental modification of established products or proven concepts;
- Straightforward engineering efforts with little technical risk;
If you developed a prototype or minimum viable product (MVP), you may get rejected for having an “established product.” But that’s not what the solicitation says:
NSF SBIR/STTR proposals are often evaluated via the concepts of Technical Risk and Technological Innovation. Technical Risk assumes that the possibility of technical failure exists for an envisioned product, service, or solution to be successfully developed. This risk is present even to those suitably skilled in the art of the component, subsystem, method, technique, tool, or algorithm in question. Technological Innovation indicates that the new product or service is differentiated from current products or services; that is, the new technology holds the potential to result in a product or service with a substantial and durable advantage over competing solutions on the market. It also generally provides a barrier to entry for competitors. This means that if the new product, service, or solution is successfully realized and brought to the market, it should be difficult for a well-qualified, competing firm to reverse-engineer or otherwise neutralize the competitive advantage generated by leveraging fundamental science or engineering research techniques.
The NSF can fund established products if the proposed innovations carry “technical risk” or represent “technological innovation.”
The problem is that “technical risk” and “technological innovation” are not obvious to a person who knows little or nothing about your field. To prevent rejection on these grounds, for each of your Technical Objectives and Challenges, write a sentence or two starting with “The technical risk is…” and another sentence or two starting with “The technological innovation is…”
These sentences go in your Project Pitch, under Technical Objectives and Challenges, and in your Project Description, in the Intellectual Merit section under the subsection “Describe the key objectives to be accomplished…” You can use the same text in both your Project Pitch and your Project Description.
Looking back on the work you’ve done can’t disqualify your proposal. Reviewers must look forward to the work you propose to do.
Spot the Mystery Rule
Flag reviews that mark you down for non-existent rules.
My most recent proposal was marked down because my budget justification failed to justify the 50% indirect costs and the 7% small business fee. The solicitations explicitly state that no justification is needed for these items.
Email your Program Director and politely ask for an explanation of “spot the mystery rule” reviews. In my experience, Program Directors won’t respond to such a request. If they did respond, and admitted that your proposal was marked down due to a non-existent rule, you would have an admission of non-compliance with the solicitations and policies.
Reviews Show Reviewers’ Expertise, or Lack Thereof
Your reviews may be prima facie evidence that your reviewers aren’t qualified.
- TLDR. Some reviews show no evidence that the reviewer read the proposal beyond the first paragraph. This is often combined with “spot the mystery rule,” i.e., the reviewer read the first paragraph, spotted a mystery rule, and rejected the proposal without reading further.
- Expertise expired. These reviews show knowledge of a field of a proposal, circa 1973. If a reviewer is not current in a field, they’re not an expert.
- Ignorance of the science and engineering subfields. These reviews contain gems betraying ignorance of the fields of the proposals. Examples from my reviews include claims that memorizing useful phrases is an effective way to become fluent in a language, that Chinese is written with an alphabet, that Arabic is written with pictures, and that AI translators will make learning second languages obsolete.
- Confusing prior research with your proposal. Several reviews confused summaries of previous research in the field with what I intended to do. For example, my proposals discuss brain imaging research, and several reviewers thought that I intended to build an app that would scan users’ brains.
I’ve never seen a review discuss my commercialization plans. This omission shows that the reviewers lack expertise in small business management and that the NSF database of reviewers is not appropriate for the SBIR program.
TLDR, Again
I often see reviews say that something wasn’t addressed in the proposal when a paragraph in the proposal addressed exactly that issue. Reading eight proposals in a day is beyond human capabilities. You can’t blame the reviewers, but you can blame the review process. Email your Program Director pointing out each TLDR example.
Are Scores Random?
If you’ve submitted your proposal more than once, you should see the scores go up. If instead your scores went down, email the Program Director and ask why. Summarize how your recent proposal was better than your previous proposal, e.g., you hired a grant writer to help you, or you addressed a concern expressed in a review of the previous proposal.
A substantial drop in scores is evidence that the reviewers of the recent proposal were not qualified.
Suggested Reviewers
Solicitations include a section where applicants can suggest reviewers from outside the NSF. A Program Director told me that suggested reviewers are rarely consulted.
I sent an email to the Section Head (the Program Directors’ supervisor). I suggested falling back on the suggested reviewers, instead of the subtopic, when a Program Director is unable to find qualified reviewers. This would ensure that every proposal is reviewed by experts in the particular fields of the proposal. In contrast, falling back on the subtopic finds reviewers who have only limited expertise (“conversational knowledge,” to use a Program Director’s phrase) or no expertise at all.
I suggested making a Suggested Reviewers form with boxes to check whether each suggested reviewer is a potential grant provider, investor, or customer. Such a review would be worth one hundred reviews from academics.
It is difficult for a small, unknown startup to connect with large organizations. I presume that a call from the NSF is more likely to be returned, especially if the call is to a federal agency or to an organization such as Khan Academy that is interested in science. A call from a Program Director could “open doors” for the applicant, leading to a first customer. Even a declined proposal could benefit an applicant.
The solicitations state, “Reviewers who have significant personal or professional relationships with the proposing small business or its personnel should generally not be included.” It’s easy enough to ensure that this rule is followed. Instead of submitting the names of specific suggested reviewers, submit an organization where qualified reviewers could be found. To make it easy for the Program Director, provide the contact info for an administrator who can find a qualified reviewer in their organization.
The Section Head did not respond to my suggestions.
The NIH doesn’t allow suggested reviewers for SBIR proposals.
What the NSF Should Do To Improve Reviews
- SBIR reviewers should be from industry, not academia. The Department of Education SBIR instructions say, “Although letters from university professors or individual educators often speak to the significance of an approach, these writers often lack experience with or a connection to the commercialization process and as a result, such letters often do not provide a viable plan or establish that pathways toward commercialization are available on a wide enough scale.”
- The Suggested Reviewers form should have checkboxes for reviewers who are potential funders or customers. Encourage Principal Investigators to suggest organizations where qualified reviewers can be found, not individual names.
- The key words section should be used to select reviewers, and this should be stated in the solicitations.
- Project Descriptions should be shorter. Set a word limit, not a page limit. Formatting rules should follow proven readability guidelines.
- Add a checkbox that lets Principal Investigators reduce their budget by $1000 (e.g., from $275,000 to $274,000); in return, the Program Director spends more time finding qualified reviewers and the reviewers spend more time reviewing the proposal.
- When no qualified reviewers can be found, don’t fall back on the subtopic. Find qualified reviewers.
- Train reviewers to refuse to review proposals that they’re not qualified to review.
- Release the reviewer expertise “comfort levels” and the panel key words to the Principal Investigator.
- Allow links to videos, websites, and apps.
- Allow Principal Investigators to record their Project Descriptions, like audiobooks. Reviewers could listen to the Project Descriptions while walking their dogs and then be better prepared when they read the proposals.
Other SBIR Agencies
The Department of Education (DoEd) promises that proposals will be reviewed by “research scientists and education technology experts from the agency or other federal agencies.” Lists of suggested reviewers are not solicited. The DoEd funds only seventeen SBIR proposals annually. (The NSF funds about four hundred.) Until recently, the DoEd accepted proposals only for instruments for classroom use that had pilot study data. With this narrow scope, it needed only a limited range of expertise. The DoEd has since opened its SBIR program to a wider range of topics. We’ll see whether it can maintain the quality of reviews or whether it will need to ask proposers for suggested reviewers.
The National Institutes of Health (NIH) uses staff reviewers and doesn’t solicit lists of suggested reviewers. The NIH accepts a wide range of proposals, though not as wide a range as the NSF.
What To Do When Your Reviewers Are Idiots
If you suspect that your reviewers lack expertise, contact your Program Director. My experience has been that Program Directors don’t respond to questions about reviewer expertise. An NSF attorney told me that Program Directors aren’t allowed to respond to such questions.
Next, talk to other SBIR applicants. Reddit has an r/SBIR forum.
Universities have programs to help faculty and graduate students apply for grants. In my experience all they do is help you file an application. They have no idea what to do if your application is rejected.
Grant writing consultants similarly will help you write an application but can’t help with rejections. Christine at E. B. Howard Consulting is an exception. You can read her post on r/SBIR. She repeats my complaints about reviewers lacking subject matter expertise, lacking training, and lacking commercialization experience, and adds complaints about conflicts of interest and racist/classist/sexist biases.
Freedom of Information Act (FOIA)
FOIA requests are free and easy. Most requests are filled in three or four months. Try a FOIA request to get the “comfort levels” for your proposal. The NSF will block your request, but at least you’ll have tried. Here is a sample request:
National Science Foundation (NSF) Small Business Innovation Research (SBIR) proposal reviewers rate their “comfort level” of expertise with each proposal, on a four-point scale. This FOIA request is for the “comfort levels” of the reviewers for the following NSF SBIR proposal:
123456 (submitted 09/06/2023)
SBIR Program Solicitation NSF 23–515, VI. NSF Proposal Processing and Review Procedures states:
“When a proposal is declined, verbatim copies of reviews (excluding the names, institutions, or other identifying information of the reviewers)…are sent to the Principal Investigator…”
I am the Principal Investigator for the above proposal.
“Comfort levels” are part of reviews and do not identify reviewers. The NSF should release “comfort levels” to Principal Investigators. “Comfort levels” don’t fall under the Henke* prohibition on releasing reviewers’ private information.
* Henke v. Department of Commerce et al., 83 F.3d 1445 (D.C. Cir. 1996); https://www.nsf.gov/news/news_summ.jsp?cntn_id=100876.
When In Doubt, Sue!
My book How To Sue the Government shows how to file a lawsuit against a government agency pro se (without an attorney). My Medium blog post summarizes the book.
Numerous people have told me that there’s no chance of a pro se plaintiff winning a lawsuit against a federal agency. I don’t know if that’s true, but I’m certain that a plaintiff who doesn’t file a lawsuit isn’t going to win.