Third Annual Patent Colloquium: Sequential Innovation and Patent Pooling

Sequential innovation is exactly what it sounds like: one innovation building on another. In general, this is something we would like to encourage, because further innovation improves on the original product and, in an ideal world, makes everyone better off. The central question for patent law is how to design rules that encourage sequential innovation while providing fair compensation to each innovator.

Professor James Bessen was unfortunately unable to give his presentation due to a last-minute illness, but Professor Rebecca Eisenberg provided a highly informative summary of his main points. She started by drawing on historical examples to highlight the importance of sequential innovation. The evidence suggests that there are often four to five decades between when a pioneering innovation first occurs and when the industry making use of that innovation fully takes off. An analysis of the history of the power loom shows that, in terms of yards of cloth produced per hour, most of the major improvements in the technology occurred decades after the original innovation. This history suggests that in terms of social utility the original power loom had relatively little value, and that it was only through incremental improvements that it became truly useful. Of course, these improvements also create value for the original innovator, as more and more people use the product. Thus, in an ideal situation, sequential innovation is a win-win. With this in mind, Professor Bessen’s work suggests that a version of patent law that takes the importance of sequential innovation seriously would likely be less strict than our current one, doing more to encourage imitation as a means towards better results for all involved.

Professor Nancy Gallini laid out three basic scenarios in which to think about sequential innovation. In the first, a single inventor holds a patent on which someone else seeks to improve. Most of the time, the two parties will enter into an ex-ante licence agreement, which benefits both of them and avoids potential disputes. Professor Gallini pointed to gene patents as a fairly good real-life example of this scenario. In the second scenario, a large number of patent holders all hold patents essential to the product on which an industry seeks to improve; audio compression technology is an example. Ex-ante licensing can be almost impossible here because there are so many patent holders to negotiate with, creating the danger of an anti-commons forming. One solution is patent pooling: gathering all of the relevant patents into a single pool that can then be bargained with collectively with relative ease. The third scenario, referred to as a ‘patent thicket’, is an incredibly messy mix in which most patent holders are also innovators, there is not necessarily a clear sequence in which innovation occurred, and there is no standard set of essential patents from which to license. Nanotechnology is a good example, as is the telecom industry, in which almost every major innovator is suing every other for patent infringement, creating a strong disincentive to innovate.

Professor Eisenberg discussed some of the ways in which patent law tries to deal with the potential disincentives to innovate caused by the messiness of patent thickets. One is the set of requirements for acquiring a patent in the first place. The patentability requirements that all patents be novel, non-obvious, and useful help create a system that encourages sequential innovation. She noted that the general trend is for broad patents on pioneering breakthroughs to be followed by narrower and narrower patents as a field develops. When there is only one patent holder (the original breakthrough innovation), licensing is fairly easy. As the number of patents expands, each successive patent is driven to increasing narrowness as a direct result of the requirements of novelty, usefulness, and non-obviousness. The narrower a patent, the easier it is to innovate around it rather than license it, thus somewhat limiting the problem of the patent thicket. Professor Eisenberg also noted that in the real world asserting patents requires a lot of money, so innovators frequently simply infringe and are not sued. Thus, some of the theoretical problems of the patent thicket do not result in a real-life reduction in innovation. However, she noted that the spectre of patent assertion entities (PAEs, or ‘patent trolls’) may pose a problem for this practical solution.

Professor Gallini then discussed how encouraging sequential innovation creates incentives not just to innovate but also, crucially, to negotiate. Referring back to her earlier discussion, she noted that in the first scenario of a single patent holder and a single innovator, and sometimes in the second scenario of a patent pool, negotiations often occur and everyone is better off. She pointed to the chemical, pharmaceutical, and mechanical industries as examples that tend towards successful negotiation. Where problems develop is in the third scenario of a patent thicket, when it is far from clear who the relevant patent holders are. She discussed three possible solutions to this problem: 1) consider having different patent laws for different industries; 2) create laws to encourage negotiation in thicket areas, perhaps by changing anti-trust laws; and 3) try to enforce stricter clarity requirements in patents to make the thicket less dense for innovators.

The panel concluded with a discussion of patent pools as a way to combat patent thickets. The idea is that if all the patents requiring licensing to innovate in a given area can be identified, they can be grouped together in a single ‘pool’ that can be negotiated over and licensed collectively. The benefits of a patent pool are that it can allow for efficient pricing, pool risk, limit litigation, and help smaller firms that rely on larger firms for regulatory expertise, marketing, and distribution. However, patent pools also have costs. They can harbour weak patents, reduce incentives to innovate, foreclose rivals in upstream markets, and allow improper collaboration on price setting outside the scope of the pool.

To try to balance these competing concerns, Professor Gallini suggested a few indicators anti-trust authorities can use to determine whether a patent pool is truly helping to cut through a patent thicket or merely allowing for collusion: first, ensure that all the patents in the pool are valid; second, ensure that the patents are complementary (i.e. that each adds something to the pool) rather than merely substitutes (parallel innovations to other patents in the pool); third, see whether the pool allows individual patents to be licensed independently of the pool as a whole; and fourth, check whether the pool’s licensing scheme includes a grant-back arrangement (an agreement under which the licensee agrees to provide the licensor some right, usually a licence, on improvements to the licensed technology). While agreeing with these suggestions, Professor Eisenberg noted that distinguishing valid from invalid patents is frequently very costly and complicated.

Overall, the panel concluded that the interplay between patent law, anti-trust law, and licensing negotiations affects innovation, and that effective regulation must take into account the complex relations between these factors.