Are Your SBIR Reviewers Idiots?

Thomas David Kehoe
22 min read · Jan 7, 2024
[Image: National Science Foundation logo with a chimpanzee scientist]

All proposals are carefully reviewed by…three to ten other persons outside NSF…who are experts in the particular fields represented by the proposal.

— Small Business Innovation and Research (SBIR) Program Solicitation NSF 23–515, VI. NSF Proposal Processing and Review Procedures

See also the Proposal & Award Policies & Procedures Guide (PAPPG), Chapter III — NSF Proposal Processing and Review, B. Selection of Reviewers

A person who thinks they’re an expert in a field they know little about is an idiot.

How the NSF Actually Reviews Proposals

The SBIR review process begins with a program director gathering approximately eight proposals submitted in a single subtopic, for example, Agricultural Technology. One proposal might be about soil, another about plants, another about weeds, another about fertilizer effluent washing into rivers, etc.

The program director uses the key words from this collection of proposals to search a database of reviewers. The key words go in the Overview section of the Project Summary. Select your key words carefully. Don’t use overly broad terms, or you will invite unqualified reviewers to review your proposal. Panel key words are not sent to Principal Investigators.

I examined this NSF reviewer database. The reviewers are almost entirely from academia. I saw a retired university president and an Amway salesman (not the same person).

The SBIR program is for small businesses to conduct scientific research that leads to commercial products. The program is not intended for university research. The SBIR program has been captured by universities, which are among our largest corporations. SBIR reviewers should be from industry. The ideal reviewers would be potential customers for your innovation. Your reviewers won’t be ideal. They likely won’t even be adequate.

A dozen or so reviewers are contacted and invited to join a panel. Typically, 25–30% of potential reviewers contacted agree to join the panel. Panels usually consist of three or four reviewers.

Reviewers are asked to rate their “comfort level” of expertise with each proposal, on a four-point scale. Program directors try to convene panels with many reviewers who are “comfortable” with many of the proposals. However, constraints sometimes make this impossible.

The “comfort levels” are not shown to the Principal Investigator.

Verbatim copies of reviews, excluding the names of the reviewers or any reviewer-identifying information, are sent to the Principal Investigator…

— Small Business Innovation and Research (SBIR) Program Solicitation NSF 23–515, VI. NSF Proposal Processing and Review Procedures

“Verbatim” means “the exact words” or “word-for-word.” “Comfort levels” of expertise are part of the reviews, not “reviewer-identifying information.” You are entitled to see the reviewers’ “comfort levels,” and the panel key words, but the NSF won’t share this information with you.

The NSF has never investigated whether there’s a correlation between “comfort levels” and scores. If reviewers with little expertise in the fields of proposals give lower scores, and experts give higher scores, then the playing field isn’t level.
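
Such an analysis would take an afternoon. Here is a minimal sketch of how it could be run, assuming hypothetical data; the comfort levels (1–4) and panel scores (mapped to a 1–5 scale, Poor through Excellent) below are invented for illustration only:

```python
# Hypothetical check: do reviewers' self-rated "comfort levels" track the
# scores they give? All data below is invented for illustration only.
from scipy.stats import spearmanr

comfort_levels = [1, 2, 2, 3, 4, 4, 3, 1, 2, 4]  # self-rated expertise, 1-4
scores_given   = [2, 3, 2, 4, 5, 4, 4, 1, 2, 5]  # panel scores, 1 = Poor .. 5 = Excellent

rho, p_value = spearmanr(comfort_levels, scores_given)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A large positive rho with a small p-value would mean that low-comfort reviewers
# systematically give lower scores, i.e., the playing field isn't level.
```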

If a key word search fails to find qualified reviewers for a proposal, the program director falls back on the subtopic, grouping the proposal with other proposals in that subtopic. Program directors rarely fall back on the suggested reviewers instead. More on this below.

The panels meet for a day to discuss the proposals. Some panels meet remotely while others meet in-person. Remote panelists earn $200 per day. In-person, local panelists earn $280 per day. In-person panelists who must travel earn $480 per day plus travel expenses. This pay is recognized to be below competitive rates, i.e., reviewers are expected to work altruistically.

Panelists are expected to have general or “conversational” knowledge of the fields of the proposals or the subtopic. Deep expertise in any field is not required. A potential reviewer who has deep knowledge of one field of one proposal but lacks general knowledge of the fields of all the proposals will not be invited.

This panel design encourages panelists to talk to each other. If one panelist has a question, another panelist may be able to answer it. It is not unusual for reviewers’ minds to be changed during these discussions.

Each panelist writes an individual review of each proposal, and the panel writes a group review as well.

There is no guarantee that anyone on a panel will have expertise in the particular fields of any proposal. In other words, the NSF does not follow its own rules in choosing reviewers.

In addition to review panels, the NSF also uses “ad hoc” reviewers. These are individual reviewers who work alone and are selected for having deep knowledge of a proposal’s fields.

The NSF has no appeals process when an applicant suspects something is wrong with a proposal’s reviews.

Reviewer Expertise is Private Information

NSF staff cite the Privacy Act and Henke v. Department of Commerce et al., 83 F.3d 1445, to justify not revealing reviewers’ expertise to applicants. The NSF argues that, while expertise isn’t discussed in Henke, by extension expertise cannot be revealed because expertise might identify a reviewer.

The Privacy Act, 5 U.S.C. § 552a(b), lists twelve conditions in which the federal government may disclose private information about individuals. The eleventh condition, 5 U.S.C. § 552a(b)(11), is that a judge may issue a court order to a federal agency to reveal private information.

What’s Wrong With NSF’s Review Process

  1. The reviewers in the database are unqualified for the SBIR program. The reviewers are academics but the SBIR program is for industry. Some of the reviewers are retired. Some reviewers have no apparent scientific expertise, e.g., the Amway salesman.
  2. The solicitation doesn’t emphasize how important the key words section is. If you don’t choose your key words carefully you are inviting unqualified reviewers.
  3. Project Descriptions are too long, at fifteen pages (about 6500 words, or twenty normally formatted pages). In contrast, the NIH Research Strategy is six pages (3300 words, or about ten normally formatted pages). The DoEd Project Narrative is ten pages (4750 words). The NSF Project Description has redundant questions, in an order that’s hard to follow. Reviewers can’t read eight Project Descriptions in eight hours and retain what they read.
  4. Reviewers are overworked and underpaid. Reviewers are expected to read eight proposals in one day. Each proposal is about fifty pages, including the essential Project Description (fifteen pages). Reviewers are paid $25–35 per proposal. This is for $275,000 grants, so the reviewers are paid about 0.01% (roughly 1/10,000) of the value of the decisions they’re making (see the arithmetic sketch after this list). Suppose there were a box you could check that would pay the reviewers $250, get two or three hours of their time, and reduce your grant to $274,000. Would you check the box?
  5. Falling back onto the subtopic when no qualified reviewers are found for your proposal is non-compliant with solicitations and NSF policy. Reviewers are required to have “expertise in the particular fields of the proposal,” not “conversational knowledge.”
  6. Reviewers aren’t told to refuse to review proposals that they’re not qualified to review.
  7. “Comfort levels” and the panel key words are not provided to the Principal Investigator. I’ve asked and I was refused. NSF policy states that Principal Investigators are to receive the “verbatim” reviews, that is, every word of the reviews, except for information that could identify individual reviewers. (“Comfort levels” and panel key words can’t identify reviewers.)
  8. Suggested reviewers are rarely consulted.
  9. Screenwriters are taught “show, don’t tell.” The NSF rule is “tell, don’t show.” No links to videos or apps are allowed. Reviewers are expected to read a description of an app when ten minutes playing around with an app would be better. A two-minute video can show what an innovation will do better than pages of descriptions.
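
The arithmetic behind point 4 is simple enough to check yourself. A minimal sketch, using the day rates and grant size quoted above (exact figures vary by panel):

```python
# Back-of-the-envelope arithmetic for reviewer pay per proposal (figures from the text above).
day_rate_low, day_rate_high = 200, 280  # remote vs. in-person local panelist pay per day
proposals_per_day = 8
grant_size = 275_000

pay_low = day_rate_low / proposals_per_day    # $25 per proposal
pay_high = day_rate_high / proposals_per_day  # $35 per proposal
fraction = pay_low / grant_size               # ~0.00009, i.e., about 0.01%, roughly 1/10,000

print(f"${pay_low:.0f}-${pay_high:.0f} per proposal, about {fraction:.3%} of the grant's value")
```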

“No Pilot Study Data”

One of the most frequent reasons my proposals have been rejected is “No pilot study data proving that the innovation is effective.”

The NSF doesn’t require pilot study data. Reviewers make up that rule.

Department of Education SBIR solicitations require that a prototype has been developed and pilot study data has been collected. The NSF has no such requirement.

“Incremental Modification of Established Products”

This is a real rule.

Project Activities Not Responsive to the Solicitation.

- Evolutionary development or incremental modification of established products or proven concepts;

- Straightforward engineering efforts with little technical risk;

If you developed a prototype or minimum viable product (MVP), you may get rejected for having an “established product.” But that’s not what the solicitation says:

NSF SBIR/STTR proposals are often evaluated via the concepts of Technical Risk and Technological Innovation. Technical Risk assumes that the possibility of technical failure exists for an envisioned product, service, or solution to be successfully developed. This risk is present even to those suitably skilled in the art of the component, subsystem, method, technique, tool, or algorithm in question. Technological Innovation indicates that the new product or service is differentiated from current products or services; that is, the new technology holds the potential to result in a product or service with a substantial and durable
advantage over competing solutions on the market. It also generally provides a barrier to entry for competitors. This means that if the new product, service, or solution is successfully realized and brought to the market, it should be difficult for a well-qualified, competing firm to reverse-engineer or otherwise neutralize the competitive advantage generated by leveraging fundamental science or engineering research techniques.

The NSF can fund established products if the proposed innovations carry “technical risk” or represent “technological innovation.”

The problem is that “technical risk” and “technological innovation” are not obvious to a person who knows little or nothing about your field. To prevent rejection on these grounds, for each of your Technical Objectives and Challenges, write a sentence or two starting with “The technical risk is…” and another sentence or two starting with “The technological innovation is…”

These paragraphs go into your Project Pitch in the Technical Objectives and Challenges and in the Project Description in the Intellectual Merits section under the subsection “Describe the key objectives to be accomplished…” You can use the same paragraphs in both your Project Pitch and your Project Description.

Looking back on the work you’ve done can’t disqualify your proposal. Reviewers must look forward to the work you propose to do.

Spot the Mystery Rule

Flag reviews that mark you down for non-existent rules.

My most recent proposal was marked down because my budget justification failed to justify the 50% indirect costs and the 7% small business fee. The solicitations explicitly state that no justification is needed for these items.

Email your Program Director and politely ask for an explanation of “spot the mystery rule” reviews. In my experience, Program Directors will not respond to such a request. If they did, and they admitted that your proposal was marked down due to a non-existent rule, then you would have admission of non-compliance with solicitations and policies.

Reviews Show Reviewers’ Expertise, or Lack Thereof

Your reviews may be prima facie evidence that your reviewers aren’t qualified.

  • TLDR. Some reviews show no evidence that the reviewer read the proposal beyond the first paragraph. This is often combined with “spot the mystery rule,” i.e., the reviewer read the first paragraph, spotted a mystery rule, and rejected the proposal without reading further.
  • Expertise expired. These reviews show knowledge of a field of a proposal, circa 1973. If a reviewer is not current in a field, they’re not an expert.
  • Special ignorance of the science and engineering subfields. These reviews contain gems betraying ignorance of the fields of the proposals. Examples from my reviews include that memorizing useful phrases is an effective way to become fluent in a language, that Chinese is written with an alphabet, and that Arabic is written with pictures.
  • Confusing prior research with your proposal. Several reviews confused summaries of previous research in the field with what I intended to do. E.g., my proposals discuss brain imaging research. Several reviewers thought that I intended to build an app that would scan users’ brains.

I’ve never seen a review discuss my commercialization plans. This omission shows that the reviewers lacked expertise in small business management and that the NSF database of reviewers is not appropriate for the SBIR program.

TLDR, Again

I often see reviews claim that something wasn’t addressed in the proposal, when a paragraph in the proposal addressed that very issue. Reading eight proposals in a day is beyond human capabilities. You can’t blame the reviewers, but you can blame the review process. Email the Program Director pointing out each TLDR example.

Are Scores Random?

If you’ve submitted your proposal more than once, you should see the scores go up. If they went down instead, email the Program Director and ask why. Summarize how your recent proposal was better than your previous one, e.g., you hired a grant writer to help you, or you addressed a concern raised in a review of the previous proposal.

A substantial drop in scores is evidence that the reviewers of the recent proposal were not qualified.

Suggested Reviewers

Solicitations include a section for suggested reviewers outside the NSF. A Program Director told me that suggested reviewers are rarely consulted.

I sent an email to Carol Bessel (Section Head, i.e., the Program Directors’ supervisor). I suggested falling back on the suggested reviewers instead of the subtopic when a Program Director is unable to find qualified reviewers. This would ensure that every proposal is reviewed by experts in the particular fields of the proposal. In contrast, falling back to the subtopic finds reviewers who have only limited expertise (“conversational knowledge,” to use a Program Director’s phrase) or no expertise at all.

I also suggested adding checkboxes to the Suggested Reviewers form to indicate whether each suggested reviewer is a potential grant provider, investor, or customer. A review from such a person would be worth one hundred reviews from academics.

It is difficult for a small, unknown startup to connect with large organizations. I presume that a call from the NSF is more likely to be returned, especially if the call is to a federal agency or to an organization such as Khan Academy that is interested in science. A call from a Program Director could “open doors” for the applicant, leading to a grant, an investment, or an order. Even a declined proposal could benefit an applicant.

The solicitations state, “Reviewers who have significant personal or professional relationships with the proposing small business or its personnel should generally not be included.” It’s easy enough to ensure that this rule is followed. Instead of submitting the names of specific suggested reviewers, submit an organization where qualified reviewers could be found. For example, I put this into my Suggested Reviewers form:

Department of Defense, Defense Language Institute Foreign Language Center (DLIFLC)
Jane Doe, Academic Advisor for Ed & Info Tech
Voice: (555) 123–4567
Email: Jane.Doe@army.mil
Ms. Doe is not suggested as a reviewer but she can find a qualified reviewer at the DLI.

The Program Director can easily ask this administrator to find a qualified reviewer.

It may take more time for program directors to contact suggested reviewers, but if even one in one hundred suggested reviewers provides a grant, an investment, or an order, then this reform would pay for itself in reduced NSF spending.

Carol Bessel did not respond to my suggestions.

The NIH doesn’t allow suggested reviewers for SBIR proposals.

What the NSF Should Do To Improve Reviews

  1. SBIR reviewers should be from industry, not academia. The Department of Education SBIR instructions say, “Although letters from university professors or individual educators often speak to the significance of an approach, these writers often lack experience with or a connection to the commercialization process and as a result, such letters often do not provide a viable plan or establish that pathways toward commercialization are available on a wide enough scale.”
  2. The Suggested Reviewers form should have checkboxes for reviewers who are potential funders or customers. Encourage Principal Investigators to suggest organizations where qualified reviewers can be found, not individual names.
  3. The solicitation should emphasize how important the key words section is.
  4. Project Descriptions should be shorter. Set a word limit, not a page limit. Formatting rules should follow proven readability guidelines, e.g., bigger fonts, narrower margins, wider line spacing, two columns on a page, etc. 5000 words would be a good limit (about 75% of the present length).
  5. Make a checkbox that Principal Investigators can choose to reduce their budget by $1000 (e.g., from $275,000 to $274,000) and in return the Program Directors spend a few hours contacting suggested reviewers, and the reviewers spend a few hours reviewing the proposal (i.e., a panel reviews three proposals instead of eight proposals).
  6. When no qualified reviewers can be found, don’t fall back on the subtopic. Find qualified reviewers.
  7. Train reviewers to refuse to review proposals that they’re not qualified to review.
  8. Release the reviewer expertise “comfort levels” and the panel key words to the Principal Investigator.
  9. Allow links to videos, websites, and apps. “Show, don’t tell.”
  10. Allow Principal Investigators to record their Project Descriptions, like audiobooks. Reviewers could listen to the Project Descriptions while walking their dogs and then be better prepared when they read the proposals.

Other SBIR Agencies

The Department of Education (DoEd) promises that proposals will be reviewed by “research scientists and education technology experts from the agency or other federal agencies.” Lists of suggested reviewers are not solicited. The DoEd funds only seventeen SBIR proposals annually. (NSF funds about four hundred annually.) Until recently, DoEd accepted proposals only for instruments for classroom use that had pilot study data. With this narrow scope they needed only a limited range of expertise. The DoEd has since opened up its SBIR program to wider topics. We’ll see if they can maintain the quality of reviews, or if they’ll need to ask proposers for suggested reviewers.

The National Institutes of Health (NIH) uses staff reviewers and doesn’t solicit lists of suggested reviewers. They accept a wide range of proposals but not as wide as the NSF. Both agencies employ around 1700 employees.

What To Do When Your Reviewers Are Idiots

If you suspect that your reviewers lack expertise, contact your Program Director. My experience has been that Program Directors don’t respond to questions about reviewer expertise. An NSF attorney told me that Program Directors aren’t allowed to respond to such questions.

Next, talk to other SBIR applicants. Reddit has an r/SBIR forum.

Universities have programs to help faculty and graduate students apply for grants. In my experience all they do is help you file an application. They have no idea what to do if your application is rejected.

Grant writing consultants similarly will help you write an application but can’t help with rejections. Christine at E. B. Howard Consulting is an exception. You can read her post on r/SBIR. She repeats my complaints about reviewers lacking subject matter expertise, lacking training, and lacking commercialization experience, and adds complaints about conflicts of interest and racist/classist/sexist biases.

Freedom of Information Act (FOIA)

FOIA requests are free and easy. Most requests are filled in three or four months. Try a FOIA request to get the “comfort levels” for your proposal. The NSF will deny your request, but at least you tried. Here is the text of my request:

National Science Foundation (NSF) Small Business Innovation Research (SBIR) proposal reviewers rate their “comfort level” of expertise with each proposal, on a four-point scale. This FOIA request is for the “comfort levels” of the reviewers for the following NSF SBIR proposal:

123456 (submitted 09/06/2023)

SBIR Program Solicitation NSF 23–515, VI. NSF Proposal Processing and Review Procedures states:

“When a proposal is declined, verbatim copies of reviews (excluding the names, institutions, or other identifying information of the reviewers)…are sent to the Principal Investigator…”

I am the Principal Investigator for the above proposal.

“Comfort levels” are part of reviews and do not identify reviewers. NSF should release “comfort levels” to Principal Investigators. “Comfort levels” don’t fall under the Henke* prohibition on releasing reviewers’ private information.

* Henke v. Department of Commerce et al., 83 F.3d 1445. D.C. Cir. 1996; https://www.nsf.gov/news/news_summ.jsp?cntn_id=100876.

How to Complain

I’m not an attorney and the following isn’t legal advice.

The mission of the NSF Office of the Inspector General is to

provide independent oversight of the National Science Foundation to improve the effectiveness, efficiency, and economy of its programs and operations and to prevent and detect fraud, waste, and abuse.

What they actually do, according to the most recent semi-annual report they send to Congress, is to find about $500,000 of questionable spending by grantees. That’s about 0.01% of the grants the NSF makes.

(Our local newspaper had a story about storm chasers who defrauded $2.4 million from the NSF, NASA, and NOAA.)

They also investigate scientific fraud, although I didn’t see any in their most recent report.

The document a plaintiff files to start a lawsuit is called a complaint. A complaint filed with a federal agency about its own conduct (for example, through its Office of Inspector General) is an administrative complaint.

Filing an administrative complaint on the OIG hotline website is free and easy. The OIG responded in a few days, referring me to a Program Director. The Program Director responded quickly and wanted to help but was baffled as to what the OIG expected him to do. Investigating alleged NSF staff misconduct is the OIG’s job, not the Program Directors’.

I filed a second complaint, with the Program Director’s response. The OIG responded that they were not going to investigate.

When In Doubt, Sue!

You can’t just call your local law firm or ask your college friend who’s an attorney. Suing a federal agency is a specialized field that few law firms outside of Washington, DC, can handle. Search law firms for the phrase “federal question jurisdiction.”

I filed a lawsuit against the NSF pro se (without an attorney). So far I’ve spent only $20 at the post office on Certified Mail.

Start by reading An Introduction to Judicial Review of Federal Agency Action, by Jared P. Cole, published by the Congressional Research Service on December 7, 2016.

Jurisdiction

For me, jurisdiction was the most difficult, head-scratching part of filing a lawsuit, with the least information available. It might look like “boilerplate,” but if you don’t get it right your case will be dismissed.

First, there’s diversity jurisdiction and federal question jurisdiction.

Diversity jurisdiction is for suing a defendant other than the federal government where the parties live in different states and the claim is for monetary damages over $75,000. These cases are filed in the U.S. District Court in your state.

Federal question jurisdiction is for cases dealing with a question about the U.S. Constitution, a U.S. treaty, federal law or statute, or where you are suing a federal agency. Federal agencies are residents of no states so diversity jurisdiction doesn’t work.

Federal question jurisdiction is handled by two courts, under three statutes. (Actually, more than two courts, but two main courts.)

Three statutes allow citizens to sue federal agencies:

The Administrative Procedure Act (APA, 5 U.S.C. § 551 et seq.) allows citizens to ask a court to compel a federal agency to do something or to stop doing something. This is called mandamus relief. The APA does not authorize money damages as a remedy. APA cases are heard in U.S. District Court.

The Tucker Act (28 U.S.C. § 1491) is for breach of contract. Money damages over $10,000 are permitted. A six-year statute of limitations applies. The Tucker Act covers express contracts (a written contract that both parties signed) and implied contracts. Principal Investigators and Program Directors have implied contracts. Your consideration was the significant time you expended writing your proposals. “Reliance on the promise” can also be consideration. Tucker Act cases are heard in the Court of Federal Claims.

The Federal Tort Claims Act (FTCA, 28 U.S.C. Part VI, Chapter 171 and 28 U.S.C. § 1346) covers wrongful acts or infringement of rights committed by employees of federal agencies. The agency is the defendant, not the employees. The FTCA permits money damages as a remedy. Claims must be filed within two years of the events. Interest isn’t allowed. Torts are mostly personal injury cases. Maybe an attorney could stretch the definition of tort to cover idiot reviewers, but I doubt it. FTCA cases are heard in U.S. District Court.

There’s also a “Little Tucker Act” for U.S. District Courts to hear cases in which monetary damages are under $10,000.

In my case, I want mandamus relief, in the form of a judge ordering the NSF, under the Privacy Act, to reveal the reviewers’ credentials. I’ve filed a lawsuit in U.S. District Court under the APA.

If I get this court order, and the reviewers’ credentials don’t show expertise in the particular fields of my proposals, then I’ll file a second lawsuit in the Court of Federal Claims under the Tucker Act, asking for monetary damages.

File a Lawsuit in U.S. District Court

Look for a website for “U.S. District Court for the District of” your state. On their website look for a web page for pro se plaintiffs. Now look for a guide book like Colorado’s Guide to Civil Legal Cases: For People Who Don’t Have a Lawyer. In Northern Texas the guide is Pro Se Handbook for Civil Suits.

My court’s guidebook is for diversity jurisdiction. It has little or no information about federal question jurisdiction. I haven’t found a guide for pro se plaintiffs to sue under federal question jurisdiction.

The first part of a complaint lists the parties:

A. PLAINTIFF INFORMATION

Your name and contact info.

B. DEFENDANT INFORMATION

National Science Foundation
2415 Eisenhower Avenue
Alexandria, VA 22314
(703) 292–5111
info@nsf.gov

The National Science Foundation is an independent Federal agency.

C. JURISDICTION

28 U.S. Code § 1331 gives district courts jurisdiction for civil actions under the federal laws. According to “An Introduction to Judicial Review of Federal Agency Action,” by Jared P. Cole, 28 U.S.C. Section 1331 “authorizes federal courts to hear claims arising under the APA as well as ‘nonstatutory’ and constitutional claims.”

You must also cite a statute that allows you to sue an agency. The statute that created the NSF is 42 U.S.C. §1861.

The following paragraphs should cover the “C. JURISDICTION” section.

√ Federal question pursuant to 28 U.S.C. § 1331.

List the specific federal statute, treaty, and/or provision(s) of the United States Constitution that are at issue in this case.
• Administrative Procedure Act of 1946 (APA, 5 U.S.C. § 551).
• Privacy Act (5 U.S.C. § 552a).
• National Science Foundation Act (42 U.S.C. §1861).
• Federal Grant and Cooperative Agreement Act (31 U.S.C. § 6301 et seq.)
• NSF Proposal and Award Policies and Procedures Guide (PAPPG, Title 45 of the CFR, Part 600)
• NSF Small Business Innovation and Research (SBIR) Program Solicitations NSF 19–554, NSF 20–527, NSF 21–562, NSF 22–551, and NSF 23–515
• Freedom of Information Act (FOIA) (5 U.S.C. § 552)
• Federal Rules of Civil Procedure

Add a section “Administrative Remedies Exhausted.” Detail the steps you’ve taken so far, such as contacting your Program Director, filing a FOIA request, or filing an administrative complaint with the OIG.

D. STATEMENT OF CLAIMS

Your claim must be a final agency action. The final agency action is that the NSF failed to fund your proposal. Make one claim for each declined proposal. Make another claim for any costs you incurred, such as hiring a grant writer.

After making these claims, you can narratively show the steps that led up to this final agency action, highlighting the flawed step(s).

This Administrative Complaint alleges National Science Foundation (NSF) staff misconduct and non-compliance with solicitations and policies for the Small Business Innovation Research[1] (SBIR) program in decisions to not fund the plaintiff’s proposals.

The complaint alleges that the agency failed to follow its own rules in choosing reviewers. NSF program directors selected reviewers who lacked expertise in the fields of the plaintiff’s proposals. Reviews were vague, confused, inconsistent, and lacked actionable items. Reviewers showed ignorance of solicitations and policies.

This complaint does not allege that reviewers gave low scores when they should have given high scores, but rather that reviewers were not qualified to understand the proposals.

This complaint does not suggest that NSF staff targeted the plaintiff but rather that the proposal review process is flawed. This complaint suggests reforms to make the proposal review process compliant with solicitations and policies.

[1] The NSF awards approximately four hundred SBIR grants annually, with an average award of $500,000, or $200 million of the NSF’s $10 billion budget. Typically one in three SBIR proposals is funded.

Copy and paste the relevant sections of NSF solicitations and policies.

Small Business Innovation and Research (SBIR) Program Solicitation NSF 23–515
VI. NSF Proposal Processing and Review Procedures
“All proposals are carefully reviewed by…three to ten other persons outside NSF…who are experts in the particular fields represented by the proposal.”
https://www.nsf.gov/pubs/2023/nsf23515/nsf23515.htm

SBIR Review Process
“All proposals are carefully reviewed by a minimum of three experts in the particular fields represented by the proposal.”
https://seedfund.nsf.gov/resources/review/review-process/

Proposal & Award Policies & Procedures Guide (PAPPG)
Chapter III — NSF Proposal Processing and Review
B. Selection of Reviewers
“The NSF guidelines for the selection of reviewers are designed to ensure selection of experts who can give Program Officers the proper information needed to make a recommendation in accordance with the NSB-approved criteria for selection of projects. Optimally, reviewers should have:
“1. Special knowledge of the science and engineering subfields involved in the proposals to be reviewed to evaluate competence, intellectual merit, and utility of the proposed activity.”
https://www.nsf.gov/pubs/policydocs/pappg22_1/pappg_3.jsp#IIIA2

You can’t sue under the PAPPG alone. It says optimally and should. The NSF can claim discretion to do whatever it wants.

You can sue under the PAPPG and a solicitation. Explain that Program Directors are allowed to narrow or specify solicitation rules under the broad or equivocal PAPPG rules. Solicitation rules carry the same weight of law as the PAPPG. Investigators and Program Directors have to follow the solicitations.

Write a section about the review panels, similar to what I wrote, above.

Write a section about the Privacy Act.

NSF staff cite the Privacy Act and Henke v. Department of Commerce et al., 83 F.3d 1445 to justify not revealing reviewers’ expertise to applicants. The NSF argues that, while expertise isn’t discussed in Henke, by extension expertise cannot be revealed because expertise might identify a reviewer.

5 U.S.C. § 552a(b) lists twelve conditions in which the federal government may disclose private information about individuals. The eleventh condition, 5 U.S.C. § 552a(b)(11), is that a judge may issue a court order to a federal agency to reveal private information. This complaint requests such a court order (see below).

I added a section contending that reviewers are unaware of the expertise requirement.

The plaintiff reviewed the training materials for reviewers linked to the webpage “Seeking Technical and Commercial Experts in Technology Commercialization and Translation”:
https://seedfund.nsf.gov/resources/review/
and the video “The Art and Science of Reviewing Proposals”:
https://tipsforreviewers.nsf.gov/
The plaintiff did not find instructions telling reviewers to refuse to review a proposal if they lack expertise in the particular fields of the proposal.

Write a section about what you saw in the reviews that makes you suspect that the reviewers aren’t experts.

The following prima facie evidence is not intended to prove that reviewers were unqualified but rather to raise doubt as to reviewers’ qualifications and to justify an investigation.

The reviews reveal several patterns betraying a lack of special knowledge or expertise in the particular fields of the proposals.

You can quote reviews, or summarize in paragraphs such as “Spot the mystery rule,” “TLDR,” “Special ignorance of the science and engineering subfields,” etc.

I put in the bar charts showing an expected upward progression and the actual random scores.
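
If you want to build a similar exhibit, here is a minimal matplotlib sketch; the submission years and scores below are invented, so substitute your own panel averages (scores mapped to a 1–5 scale):

```python
# Minimal sketch of an "expected vs. actual" score chart. All data below is
# invented; substitute your own panel score averages (1 = Poor .. 5 = Excellent).
import matplotlib.pyplot as plt

submissions = ["2019", "2020", "2021", "2022", "2023"]  # hypothetical resubmission years
expected = [2.5, 3.0, 3.5, 4.0, 4.5]  # scores should rise as the proposal improves
actual = [3.0, 2.0, 3.5, 1.5, 2.5]    # hypothetical scores that look random

x = range(len(submissions))
width = 0.35
plt.bar([i - width / 2 for i in x], expected, width, label="Expected progression")
plt.bar([i + width / 2 for i in x], actual, width, label="Actual panel scores")
plt.xticks(list(x), submissions)
plt.ylabel("Average panel score (1-5)")
plt.legend()
plt.savefig("score_progression.png")
```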

I wrote a section about suggested reviewers, as you read above. This is a long section but is the crux of the lawsuit—when the Program Directors can’t find qualified reviewers they could fall back on the suggested reviewers but they instead fall back on the subtopic.

Next comes an essential section, “Harm Caused by Defendant.” Detail the story of your project, why you applied for NSF grants, what alternatives you forewent, and encouragement you got from Program Directors (including acceptance of Project Pitches). You must show that you were harmed.

I wrote that I chose not to work with private investors because I received encouragement from Program Directors that my project would be funded. One Program Director called to tell me that my proposal was about to be funded, and sent paperwork for me to receive the funds. Four months later the proposal was rejected.

E. REQUEST FOR RELIEF

This is what you’re asking the court to do. It’s a little complicated because you’re asking for several things.

First, if you were unable to get “comfort levels” via a FOIA request, ask the judge to order the NSF to release the “comfort levels.” APA cases are decided on the administrative record, without discovery, so the request for relief is the place to ask.

I also asked for the panel keywords, but these are less important.

Third, ask for the personal information about each reviewer:

Each reviewer’s education, affiliation, and other special knowledge or particular expertise, plus contact information to verify veracity.

The “comfort levels” are likely sufficient to prove your case. You likely won’t need reviewers’ identifying information, or maybe just for one or two reviewers.

Say that the “comfort levels” are not covered by the Privacy Act, are part of the reviews, and so are required under the “verbatim” clauses of the solicitations to be provided to grant applicants.

Finally, state the value of the denied grant applications and any other expenses, such as paying a grant writing consultant.

I added two appendices. The first listed which reviews showed prima facie evidence of lack of expertise.

1938503 (2019) #1, Fair, “Spot the mystery rule,” “TLDR.”

The second appendix listed each proposal and the scores from each review, with bold type for reviews with prima facie evidence of lack of expertise.

Numerous people have told me that there’s no chance of a pro se plaintiff winning a lawsuit against a federal agency. I don’t know if that’s true, but I’m certain that a plaintiff who doesn’t file a lawsuit isn’t going to win.

Thomas David Kehoe

I make technology for speech clinics to treat stuttering and other disorders. I like backpacking with my dog, competitive running, and Russian jokes.