2012 was a milestone year for Technology-Assisted Review (TAR), featuring the first judicial opinions expressly supporting its use by producing parties in litigation. Naturally, there has been a great deal of excitement among vendors and e-discovery lawyers. But despite these historic decisions, there remains little case law addressing how a producing party can use TAR and still meet its discovery obligations. The technologies are only beginning to be understood by lawyers and courts (as this author has previously written). As a result, there is a dearth of guidance on best practices in this nascent legal arena.

Not surprisingly, the first few cases addressing TAR have cautiously embraced its use. These decisions collectively promote a high level of cooperation and transparency, including the involvement of opposing counsel in training the system and the sharing of the set of documents used to train the system (referred to as the seed set). The concern among some TAR advocates is that these practices exceed what is required under the Federal Rules of Civil Procedure and that, if these levels of transparency come to represent the minimum legal threshold of cooperation for using TAR, producing parties will be dissuaded from using TAR as a result of the added costs and litigation risks.

The first of these early adopter cases was Da Silva Moore v. Publicis Groupe SA, 2012 WL 607412 (S.D.N.Y. Feb. 24, 2012), in which Magistrate Judge Peck made history by concluding that “[c]omputer-assisted review now can be considered judicially-approved for use in appropriate cases.” Id. at *12. The court explained that “[t]he decision to allow computer-assisted review in this case was relatively easy” because the parties had “agreed to its use” and merely “disagreed about how best to implement such review.” Id. at *8. But the court also seemed to forecast the core disputes of the next set of TAR case law: “[I]t is the process used and the interaction of man and machine that the courts needs [sic] to examine.” Id.

The plaintiffs had objected to the TAR methodology proposed by the defendants on the basis that “there is no way to be certain if [defendant]’s method is reliable” and that the defendants’ proposed method “fails to include an agreed-upon standard of relevance.” Id. Effectively, the plaintiffs’ position was analogous to objecting to a producing party’s traditional linear review as unreliable because the requesting party had not been provided with a copy of the producing party’s document review protocol. The court seemed to agree, stating that “[t]he issue regarding relevance standards might be significant if [defendant]’s TAR proposal was not totally transparent.” Id.

Similarly, in In re Actos (Pioglitazone) Products Liability Litig., No. 6:11-md-2299 (W.D. La. July 27, 2012), the court issued an order governing the production of ESI that comprehensively detailed how the parties would use TAR during the search and review of ESI. The court repeatedly emphasized the theme of cooperation and ordered the parties to collaborate during both the training and quality control phases of the TAR process. And, in EORHB, Inc. v. HOA Holdings LLC, C.A. No. 7409-VCL (Del. Ch. Oct. 19, 2012), a state court judge sua sponte ordered the parties to show cause why they should not select and share a single TAR vendor.

These heightened levels of transparency may concern TAR advocates because they arguably impose cooperation requirements for TAR use that exceed a producing party’s discovery obligations and go beyond what would be asked of a party performing a keyword-based linear review. As the court in Da Silva Moore recognized, Rule 26(g)(1)(B) “is the provision that applies to discovery responses.” 2012 WL 607412, at *7. That rule gives the requesting party assurances of reliability because it requires a producing party to certify that its response to document requests is reasonable, proportional and otherwise “consistent with these rules and warranted by existing law or by a non-frivolous argument for extending, modifying, or reversing existing law, or for establishing new law.” Fed. R. Civ. P. 26(g)(1)(B)(i). Taken in conjunction with Rule 34(b)(2) — which provides that a producing party must agree to produce all documents requested unless it objects to the request or a part of it — the Rules give a requesting party a strong foundation for confidence that a producing party’s use of TAR will be reliable. See Fed. R. Civ. P. 34(b)(2)(B)-(C).

However, the reality is that most requesting parties need additional assurances that highly relevant documents will not be withheld or overlooked, and that tension often leads to costly adversarial discovery fights. The tension is exacerbated when TAR is involved because of the lack of familiarity with the technologies and the relatively untested waters surrounding their use. Courts, consistent with The Sedona Cooperation Proclamation model, have increasingly encouraged cooperation among litigants to avoid discovery fights in general. But that does not mean that producing parties who decide to use TAR should bend over backwards or open up their discovery playbook to opposing counsel just to avail themselves of TAR’s cost benefits. After all, producing parties are generally not required to produce their document review protocol, nor are they required to produce sets of non-responsive documents to assuage the requesting party’s reliability concerns.

Ultimately, the court in Da Silva Moore found that the use of TAR was appropriate in that case and emphasized that the parties’ counsel should cooperate in designing “an appropriate process, including use of available technology, with appropriate quality control testing, to review and process relevant ESI while adhering to Rule 1 and Rule 26(b)(2)(C) proportionality.” 2012 WL 607412, at *7. The court explained its view that “the best approach to the use of computer-assisted coding is to follow The Sedona Cooperation Proclamation model,” which it interpreted as requiring a producing party using TAR to “[a]dvise opposing counsel that [it] plan[s] to use computer-assisted coding and seek agreement [and] if you cannot, consider whether to abandon predictive coding for that case or go to the court for advance approval.” Id. at *3.

The Sedona Principles state that “parties should confer early in discovery regarding the preservation and production of electronically stored information when these matters are at issue in the litigation and seek to agree on the scope of each party’s rights and responsibilities.” The Sedona Conference, The Sedona Principles, Second Edition: Best Practices Recommendations & Principles for Addressing Electronic Document Production (Principle Six) (2007 Annotated Version). And, as David Cross has written here, The Sedona Cooperation Proclamation has led to cooperation becoming, both in the case law and in practice, a key pillar of discovery discussions. The TAR case law seems to be following the same path. Consequently, having counsel experienced in creatively leveraging TAR is critical to cooperating fruitfully with opposing counsel (and the court) on a TAR process that is reliable and defensible but not unnecessarily transparent or intrusive.

The process that received a judicial stamp of approval in Da Silva Moore was characterized by significant involvement of the requesting party up-front. But there are myriad other ways to implement TAR that still meet a party’s discovery obligations while providing the requesting party and the court with assurances of reliability. Many can be less expensive, less intrusive and, depending on the adversarial nature and technological savvy of opposing counsel, just as defensible, if not more so.

One of these alternative approaches to cooperation is for the parties to allow the producing party to select and design its own TAR technology and process, while the negotiation focuses on an agreed-upon protocol for quality control testing of the results, particularly of the null set that is ultimately not produced. For example, the parties could agree that the producing party will review: (1) a random sample of the documents that the system codes as not responsive; (2) a small number of randomly selected documents from that null set that are captured by an agreed-upon set of search terms; or (3) a combination of both. After all, the court in Da Silva Moore agreed that “[k]eywords have a place in production of ESI,” even when TAR is used. Id. at *10.
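For illustration only, a minimal sketch of such a null-set quality control sample might look like the following (in Python, since the mechanics are easier to see in code than in prose). The document collection, the agreed search terms, the sample sizes, and the build_qc_sample helper are all hypothetical assumptions and do not reflect any particular TAR platform or any protocol approved in the cases discussed above.

```python
import random

def build_qc_sample(null_set, search_terms, random_n=500, keyword_n=100, seed=42):
    """Draw a quality-control sample from the null set (documents coded not responsive).

    Combines (1) a simple random sample of the null set with (2) a random sample of
    null-set documents that hit an agreed-upon list of search terms. `null_set` is
    assumed to be a list of (doc_id, text) tuples; every name here is illustrative.
    """
    rng = random.Random(seed)  # fixed seed so the same sample can be re-drawn and verified

    # (1) Random sample drawn across the entire null set.
    random_sample = rng.sample(null_set, min(random_n, len(null_set)))

    # (2) Random sample limited to null-set documents containing any agreed search term.
    terms = [t.lower() for t in search_terms]
    keyword_hits = [doc for doc in null_set if any(t in doc[1].lower() for t in terms)]
    keyword_sample = rng.sample(keyword_hits, min(keyword_n, len(keyword_hits)))

    # The combined samples are what the producing party's reviewers would examine by hand.
    return random_sample, keyword_sample
```

The fixed random seed is a deliberate design choice in this sketch: it lets either party re-draw and verify the identical sample if the quality control protocol is ever questioned.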

This type of approach parallels the processes used with keywords — instead of sharing its relevance determinations (or review protocol) up-front, the producing party performs iterative testing of the results on the back end. It also leverages the advantages of the available technologies, including the speed with which they code voluminous document universes (allowing both parties to focus on reviewing the documents earlier in the discovery schedule), the effectiveness with which they cull out irrelevant noise, and the ability of documents identified as responsive during the iterative quality control testing of the null set to serve as a “supplemental seed set.” The system could then use these supplemental seed sets to identify further responsive documents, with the cycle repeating until the quality control testing reveals no highly relevant documents.
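A rough schematic of that iterative loop, again purely illustrative, might look like this; train_model, classify, and qc_review are placeholders for whatever functions the chosen TAR platform and the parties’ agreed protocol actually supply, not real APIs.

```python
def iterative_tar_review(documents, initial_seed_set, train_model, classify, qc_review):
    """Schematic of the iterative loop described above: documents found responsive
    during null-set quality control feed back into training as a supplemental seed set.
    train_model, classify, and qc_review are caller-supplied placeholders, not real APIs.
    """
    seed_set = list(initial_seed_set)
    while True:
        model = train_model(seed_set)                       # train on all seed documents so far
        responsive, null_set = classify(model, documents)   # system codes the collection
        newly_found = qc_review(null_set)                   # manual QC sampling of the null set
        if not newly_found:            # stopping point: QC reveals no highly relevant documents
            return responsive
        seed_set.extend(newly_found)   # supplemental seed set for the next training round
```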

And, consistent with the proportionality principle that courts increasingly emphasize, and as the court did in Da Silva Moore, the parties can also negotiate some form of cost shifting. Some of the more popular technologies on the market do not predictively code documents but rather assign each document a relevancy score and rank the documents by that score (hence the term ‘technology-assisted review’ rather than ‘predictive coding’). When using one of these technologies, the parties can agree that the producing party will review the documents in descending order of relevancy score and that, once the ratio of responsive to non-responsive documents drops to an agreed-upon level, the producing party has no obligation to continue unless the requesting party contributes to the costs of that review. Effectively, the requesting party pays for a higher level of reliability if it so desires.
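As a back-of-the-envelope sketch of such a ranked-review arrangement, the hypothetical routine below reviews documents in descending score order and stops once the proportion of responsive documents within a recent window of coding calls falls below the agreed level. The window and min_ratio values, and the function names themselves, are assumptions for illustration, not recommendations or any court-approved protocol.

```python
def review_until_threshold(scored_docs, is_responsive, window=200, min_ratio=0.10):
    """Review documents in descending order of relevancy score, stopping once the ratio
    of responsive documents in the last `window` reviewed drops below `min_ratio`.

    `scored_docs` is assumed to be a list of (doc_id, score) pairs, and `is_responsive`
    stands in for the human reviewer's call; the threshold values are illustrative only.
    """
    reviewed, recent = [], []
    for doc_id, score in sorted(scored_docs, key=lambda d: d[1], reverse=True):
        call = is_responsive(doc_id)       # reviewer codes the document
        reviewed.append((doc_id, call))
        recent.append(call)
        if len(recent) > window:
            recent.pop(0)                  # keep only the most recent `window` coding calls
        if len(recent) == window and sum(recent) / window < min_ratio:
            break  # past this point, continued review is at the requesting party's expense
    return reviewed
```

In practice the stopping point would be whatever metric the parties negotiate; the window-based ratio here is simply one way to express “the ratio of responsive to non-responsive documents drops to an agreed-upon level” in code.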

Both of these approaches are designed to incentivize the producing party to train the system up-front to deliver the highest levels of recall and precision possible. The less effective the system, the more manual review the producing party will have to perform. Those incentives are generally not as well aligned when the parties instead negotiate up-front over training the system and access to seed sets. Moreover, the time and costs both parties save on the front end by not fighting about the seed set, or about experts retained to explain the “black box” of the technology, can instead be spent on iterative quality control testing at the back end of the review. This type of process ensures transparency, cooperation, and reliability. It also adheres to Rule 26(b)(2)(C)’s proportionality principle and Rule 1’s emphasis on “the just, speedy, and inexpensive determination of” every action. And, ultimately, it allows a producing party to reap the cost benefits of TAR while meeting its discovery obligations.