Chapter 4: Copyright Protection of Software:
Functionality, Constraints, and
Merger of Idea and Expression

Baker v. Selden
United States Supreme Court
101 U.S. 99 (1879)


Morrissey v. Procter & Gamble Co.
United States Court of Appeals for the First Circuit
379 F.2d 675 (1st Cir. 1967)

Aldrich, Chief Judge.

This is an appeal from a summary judgment for the defendant. The plaintiff, Morrissey, is the copyright owner of a set of rules for a sales promotional contest of the “sweepstakes” type involving the social security numbers of the participants. Plaintiff alleges that the defendant, Procter & Gamble Company, infringed, by copying Rule 1.

We recite plaintiff's Rule 1, and defendant's Rule 1, the italicizing in the latter being ours to note the defendant's variations or changes.

Morrissey's Rule 1

P & G's Rule 1

The district court took the position that since the substance of the contest was not copyrightable, which is unquestionably correct, Baker v. Selden, and the substance was relatively simple, it must follow that plaintiff's rule sprang directly from the substance and “contains no original creative authorship.” This does not [necessarily] follow. Copyright attaches to form of expression, and defendant's own proof itself established that there was more than one way of expressing even this simple substance. Nor, in view of the almost precise similarity of the two rules, could defendant successfully invoke the principle of a stringent standard for showing infringement which some courts apply when the subject matter involved admits of little variation in form of expression. E.g., Dorsey v. Old Surety Life Ins. Co., 98 F.2d 872 (10th Cir. 1938) (“a showing of appropriation in the exact form or substantially so”); Continental Casualty Co. v. Beardsley, 253 F.2d 702 (2d Cir.), cert. denied, 358 U.S. 816 (1958) (“a stiff standard for proof of infringement”).

Nonetheless, we must hold for the defendant. When the uncopyrightable subject matter is very narrow, so that the topic necessarily requires, if not only one form of expression, at best only a limited number, to permit copyrighting would mean that a party or parties, by copyrighting a mere handful of forms, could exhaust all possibilities of future use of the substance. In such circumstances it does not seem accurate to say that any particular form of expression comes from the subject matter. However, it is necessary to say that the subject matter would be appropriated by permitting the copyrighting of its expression. We cannot recognize copyright as a game of chess in which the public can be checkmated. Cf. Baker v. Selden, supra.

Upon examination, the matters embraced in Rule 1 are so straightforward and simple that we find this limiting principle to be applicable. Furthermore, its operation need not await an attempt to copyright all possible forms. It cannot be only the last form of expression which is to be condemned, as completing defendant's exclusion from the substance. Rather, in these circumstances, we hold that copyright does not extend to the subject matter at all, and plaintiff cannot complain even if his particular expression was deliberately adopted.

Notes

1. The court gives three possible rules for cases where only a few forms of expression could be devised to express the idea. Legal rule 1, adopted by the district court, is that no such expression is original, because the idea dictates the expression. The court rejected this rule because there was more than one way to express the substance of the contest rule. Legal rule 2 is that the scope of the copyright is very narrow, so that only verbatim or near-verbatim copying infringes. Legal rule 3 is that the subject matter is uncopyrightable (“copyright does not extend to the subject matter”).

Which rule makes the best sense? What is the practical difference, if any, among the rules?

2. Compare the First Circuit's Rule 3 with 17 U.S.C. § 102(b). Is there any difference? Review Judge Gray's discussion of this point in the NEC–Intel decision and the discussion in endnote 7 to Lotus–Borland. A similar difference in approach arose over whether one should analytically dissect out unprotectable material before or as a part of determining substantial similarity.

3. The First Circuit's decision is one of the earliest elaborations of the doctrine, stated in Baker, that when only a few ways exist to express a given idea, that idea and its expression necessarily merge, and copyright does not protect the idea.



Data East USA, Inc. v. Epyx, Inc.
United States Court of Appeals for the Ninth Circuit
862 F.2d 204 (9th Cir. 1988)

Trott, Circuit Judge.

Plaintiff-appellee Data East USA, Inc., brought this action against defendant-appellant Epyx, Inc. The district court found a copyright infringement and issued a permanent injunction and impoundment. Epyx appeals and we reverse.

Facts

Data East is engaged in the design, manufacture, and sale of audio-visual works embodied in video games for coin-operated and home-computer use. Data East applied for and received audio-visual copyright registration for the “Karate Champ” video games in suit. Epyx is engaged in the development and distribution of audio-visual works for use on home computers. It obtained a license agreement for, and commenced United States distribution of, a Commodore-compatible home-computer video game named “World Karate Championship.”

Each competing product, “Karate Champ” and “World Karate Championship,” consists of the audio-visual depiction of a karate match or matches conducted by two combatants, one clad in a typical white outfit and the other in red. Successive phases of combat are conducted against varying stationary background images depicting localities or geographic scenes. The match is supervised by a referee who directs the beginning and end of each phase of combat and announces the winning combatant of each phase by means of a cartoon-style speech balloon. Each game has a bonus round where the karate combatant breaks bricks and dodges objects. Similarities also exist in the moves used by the combatants and the scoring method.

Data East alleged that the overall appearance, compilation, and sequence of the audiovisual display of the video game “World Karate Championship” infringed its copyright for “Karate Champ” as embodied in the arcade and home versions of the video game.

The district court found that except for the graphic quality of Epyx's expressions, part of the scoreboard, the referee's physical appearance, and minor particulars in the “bonus phases,” Data East's and Epyx's games are qualitatively identical. The district court then held that Epyx's game infringes the copyright Data East has in “Karate Champ.” Based upon its decision, the district court permanently restrained and enjoined Epyx from copying, preparing derivative works, distributing, performing, or displaying the copyrighted work in the “Karate Champ” video game, the “World Karate Championship” game, or the “International Karate” game. A recall of all Commodore computer games of “World Karate Championship” and “International Karate” was ordered. This appeal followed.

Discussion

It is an axiom of copyright law that copyright protects only an author's expression of an idea, not the idea itself. 17 U.S.C. § 102(b). There is a strong public policy corollary to this axiom permitting all to use freely ideas contained in a copyrightable work, so long as the protected expression itself is not appropriated. Thus, to the extent the similarities between plaintiff's and defendant's works are confined to ideas and general concepts, these similarities are noninfringing.

The district court found that the idea expressed in plaintiff's game and in defendant's game is identical. The idea of the games was described by the court as follows: “... a martial arts karate combat game conducted between two combatants, and presided over by a referee, all of which are represented by visual images, and providing a method of scoring accomplished by full and half point scores for each player, and utilizing dots to depict full point scores and half point scores.” The district court further found that: “In each of the games, the phases of martial arts combat are conducted against still background images purporting to depict geographic or locality situses and located at the top of the screen as the game is viewed. The action of the combatants in each of the games takes place in the lower portion of the screen as the game is viewed, and is against a one color background in that portion of the screen as the game is viewed.”

The rule in the Ninth Circuit is that no substantial similarity of expression will be found when the idea and its expression are inseparable, given that protecting the expression in such circumstances would confer a monopoly of the idea upon the copyright owner. Nor can copyright protection be afforded to elements of expression that necessarily follow from an idea, or to “scènes à faire,” i.e., expressions that are “as a practical matter, indispensable or at least standard in the treatment of a given [idea].”

To determine whether similarities result from unprotectable expression, analytic dissection of similarities may be performed. (Analytic dissection of the dissimilarities as opposed to the similarities is not appropriate under this test because it distracts a reasonable observer from a comparison of the total concept and feel of the works.) If this demonstrates that all similarities in expression arise from use of common ideas, then no substantial similarity can be found.

The district court performed what can be described as an analytic dissection of similarities in its findings of fact and stated: Plaintiff's and defendant's games each encompass the idea of depicting the performance of karate martial arts combat in each of the following respects:

The district court found that the visual depiction of karate matches is subject to the constraints inherent in the sport of karate itself. The number of combatants, the stance employed by the combatants, established and recognized moves and motions regularly employed in the sport of karate, the regulation of the match by at least one referee or judge, and the manner of scoring by points and half points are among the constraints inherent in the sport of karate. Because of these constraints, karate is not susceptible of a wholly fanciful presentation. Furthermore, the use of the Commodore computer for a karate game intended for home consumption is subject to various constraints inherent in the use of that computer. Among the constraints are the use of sprites (a special technique for creating mobile graphic images on a computer screen that is appropriate for animation), and a somewhat limited access to color, together with limitations upon the use of multiple colors in one visual image.
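The sprite constraint can be made concrete. Assuming the Commodore 64, whose hardware sprites match the court's description, each sprite is a fixed 24 × 21 pixel bitmap stored in 63 bytes, with sharply limited color choices, so a combatant's figure must be drawn within that small grid. The sketch below is illustrative only: the pixel rows are invented, and only the 24-pixels-per-row, three-bytes-per-row packing reflects the actual hardware format.

```python
# Sketch of the Commodore 64 sprite format mentioned by the court:
# a hardware sprite is a 24 x 21 pixel bitmap, each row packed into
# 3 bytes, 63 bytes in all. The figure drawn here is invented.

ROWS = [
    "........XXXXXXXX........",   # each row is exactly 24 pixels
    ".......XXXXXXXXXX.......",
    "........XXXXXXXX........",
]

def pack_row(row):
    """Pack a 24-character row of '.'/'X' pixels into three bytes."""
    assert len(row) == 24
    bits = "".join("1" if c == "X" else "0" for c in row)
    return [int(bits[i:i + 8], 2) for i in range(0, 24, 8)]

sprite_bytes = [b for row in ROWS for b in pack_row(row)]
print(sprite_bytes)   # 3 bytes per row; a full sprite has 21 rows
```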

The fifteen features listed by the court “encompass the idea of karate.” These features, which consist of the game procedure, common karate moves, the idea of background scenes, a time element, a referee, computer graphics, and bonus points, result from either constraints inherent in the sport of karate or computer restraints. After careful consideration and viewing of these features, we find that they necessarily follow from the idea of a martial arts karate combat game, or are inseparable from, indispensable to, or even standard treatment of the idea of the karate sport. As such, they are not protectable. “When idea and expression coincide, there will be protection against nothing other than identical copying.” A comparison of the works in this case demonstrates that identical copying is not an issue.

Accordingly, we hold that the court did not give the appropriate weight and import to its findings which support Epyx's argument that the similarities result from unprotectable expression. Consequently, it was clear error for the district court to determine that protectable substantial similarity existed based upon these facts.

The lower court erred by not limiting the scope of Data East's copyright protection to the author's contribution—the scoreboard and background scenes. In actuality, however, the backgrounds are quite dissimilar and the method of scorekeeping, though similar, is inconsequential. Based upon these two features, a discerning 17.5-year-old boy (the average purchaser of the game) could not regard the works as substantially similar. Accordingly, Data East's copyright was not infringed on this basis either.

Because we reverse in its entirety the district court's finding of copyright infringement, it follows that the injunction was improvidently granted. Accordingly, we remand to the district court to lift the injunction.

Notes

1. In what ways does the Data East court's approach to ideas that can be expressed in only a limited number of ways differ from the Morrissey–P&G court's approach? What are the respective advantages of the approaches, from a tribunal's standpoint? What about a trial tactician's standpoint?

2. Is this just a video game problem?



NEC Corp. v. Intel Corp.
United States District Court for the Northern District of California
10 U.S.P.Q.2d 1177 (N.D. Cal. 1989)

Gray, D.J.

In this action, NEC seeks a declaration that Intel's copyrights on its 8086 and 8088 microcodes are invalid and/or are not infringed by NEC. Intel has filed a counterclaim for infringement of its copyrights on those microcodes. The issues to be determined and the decision that the court now renders on each are as follows:

...3. Do the microcodes that NEC produced for its V20, V30, V40 and V50 microprocessors infringe the Intel copyrights for its 8086 and 8088 microcodes? NEC's microcodes do not so infringe.

The Issue Of Substantial Similarity

In order to make a prima facie case of infringement, Intel must show substantial similarity between the copyrighted microcode and the accused microcode of NEC. In seeking to resolve the issue of substantial similarity, I have given careful consideration to the testimony and the conflicting conclusions of the two eminent experts, Dr. Patterson and Dr. [Gideon] Frieder [of the GW SEAS], and I have also taken into account my own impressions upon comparing the respective microcodes in light of the other testimony and the exhibits in the case. In pursuing this study, I have sought to adhere to the admonition that:

In deciding whether there is substantial similarity between the copyrighted work and the accused work, courts do not take the accused work apart and view the pieces in isolation, as if each stood alone. Where the accused work reflects an accumulation of similarities, the totality of the taking is to be considered: “When analyzing two works to determine whether they are substantially similar, courts should be careful not to lose sight of the forest for the trees.” In programming infringement cases involving comprehensive nonliteral similarity, the “trees” are the individual lines of code, and the “forest” is the detailed design.

Clapes, Lynch & Steinberg, Silicon Epics and Binary Bards: Determining The Proper Scope of Copyright Protection For Computer Programs, 34 UCLA L. Rev. 1493, 1570 (1987).

Having pondered all of these matters, it is my conclusion that the NEC microcode (Rev. 2 [Revision No. 2, last of NEC's Revisions 0, 1, and 2; Rev. 2 was NEC's commercial version]), when considered as a whole, is not substantially similar to the Intel microcode within the meaning of the copyright laws.

In the first place, none of the approximately 90 microroutines in the accused microcode are identical to Intel's copyrighted microcode. Some of the shorter ones are, indeed, substantially similar. But most of these involve simple, straightforward operations in which close similarity in approach is not surprising. On the other hand, others of the shorter microroutines of the NEC microcode are substantially different from the comparable Intel items.

Most of the approximately 40 NEC microroutines that Intel acknowledges not to be substantially similar are much longer than the accused NEC items and are quite different from the comparable Intel items in the manner in which the instructions are expressed.

As I have pondered upon the testimony of the experts and studied the exhibits, I have developed a sympathetic understanding of what Judge Learned Hand meant when he observed in a relevant situation that “the more the court is led into the intricacies of dramatic craftsmanship, the less likely it is to stand upon the firmer, if more naive ground of its considered impressions upon its own perusal.” Nichols v. Universal Pictures Corp., 45 F.2d 119 (2d Cir. 1930).

Also, Intel has proposed a finding and has cited valid authority to the effect that “[t]he test for infringement or substantial similarity is whether the work is recognized by an ordinary observer as having been taken from the copyrighted source.” For the reasons set forth above, this court concludes, based upon its own perusal, as well as upon the conflicting testimony of the experts, that the ordinary observer, considering the accused microcode as a whole, would not recognize it as having been taken from the copyrighted source.

I believe that the foregoing conclusion comes close to resolving the issue of infringement. However, as pointed out, several of the shorter accused microroutines are substantially similar to Intel's corresponding items, and it is my obligation to “make a qualitative, not quantitative, judgment about the character of the work as a whole and the importance of the substantially similar portions of the work.” Some of these similar microroutines may be very important, and if they result from copying of protected expressions, their use by NEC may be enjoined, irrespective of the general lack of similarity between the two microcodes. Accordingly, we shall discuss what the evidence indicates as to whether or not actionable copying is responsible for the similarities that do exist in some of the microroutines.

The Evidence Regarding Copying

In preparing this portion of this memorandum, I am assuming that it will be of particular interest only to Intel and NEC and their respective counsel, and therefore that anyone who reads it will be familiar with the facts. Accordingly, I shall refrain from the extremely arduous and lengthy task of describing or explaining the background circumstances involving the specific issues.

Intel urges several bases for its contention that Mr. Kaneko created NEC's microcode for its V20/V30 microprocessors by copying substantial portions of Intel's 8086/8088 microcode. These arguments are found not to be compelling.

Assessment Of Mr. Kaneko's Expertise

Intel contends that the indications are that Mr. Kaneko must have copied because of his relative inexperience with microprogramming, the arduous schedule imposed upon him within which to write the microprogram and the specifications for the hardware, and the fact that he made relatively few notes as compared to his work on other microprograms.

The record shows Mr. Kaneko to have been a very talented young man with an outstanding academic record that is highly relevant to microprocessors, and he previously had completed a substantial assembly language compiler program. Mr. Kaneko testified very creditably that he did not feel himself to have been under great pressure to complete his assignment and that he easily was able to accomplish it in two months, well within the time requested of him. The court also accepts Mr. Kaneko's testimony that the lack of notes stemmed from his conclusion that the task was relatively simple and that he was working alone, as compared with other assignments in which the participation of others made greater note taking more appropriate.

Mr. Kaneko testified in a straightforward manner and displayed considerable technical knowledge in explaining the decisions that he made in the creation of the V20/V30 microcode. He did not contend that he had not been influenced by his experience in previously having disassembled the 8086/88 microcode. Such experience inevitably became part of his expertise, and the acquired knowledge of how Mr. McKevitt created instructions to be executed by the 8086/88 microcode very well may have been a source of ideas that Mr. Kaneko utilized in preparing a microcode for the V20/V30. However, he testified creditably that he did not undertake to copy the 8086/88 microcode, and, as is noted herein, the other evidence received in the trial of this action by no means impels a contrary conclusion.

The “Patch”

The record clearly shows that Intel's designer, Mr. Stoll, was obliged to alter the Interrupt Sequence for the 8088 microcode by writing a “patch” to overcome a “bug” that was found in the 8088 microprocessor. It also was not disputed that Mr. Kaneko wrote a similar patch in creating the Interrupt Sequence of the microcode of the V20 microprocessor and that the V30 did not have either the bug or the patch.

Intel emphatically contends that the foregoing constitutes direct evidence of slavish copying. A reasonable inference is that Mr. Kaneko wrote the Interrupt Sequence into Rev. 0 to be in harmony with V20 hardware that had been sufficiently defined. The design of the V20, which was [lawfully, by license] based on Intel's 8088, contained the bug, thereby making the incorporation of a patch necessary.

Under these circumstances, and bearing in mind that NEC had the right [by license] to use the design of the 8088 microprocessor, “bug” and all, in creating its V20, there is nothing suspicious about Mr. Kaneko's having created a patch to be in harmony with the “bug” in the V20 without being concerned about whether the resulting Interrupt microsequence would operate in the V30.

It is true, as Intel points out, that Mr. Kaneko labeled Rev. 0 as being a microcode for the “8086/88 instruction set.” However, there is, of course, a distinction between an instruction set and “microcode,” and the implication in the title that the microcode that Mr. Kaneko was writing would implement the instruction set for both the 8086 and 8088 microprocessors does not necessarily mean that a particular microsequence contained therein would operate on both microprocessor chips. Rev. 0 was an initial tentative draft, and the title that heads such draft does not impair the above conclusion that the patch was created by Mr. Kaneko for the V20 microprocessor.
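The court's distinction between an instruction set and the microcode that implements it can be illustrated with a toy model. Everything in the sketch below is invented (the SWAP macroinstruction, the register names, the micro-operations); nothing is drawn from the actual 8086/88 or V-series microcode. The point is only that a single macroinstruction, as the programmer sees it, can be implemented by materially different microsequences.

```python
# Toy model of the instruction-set / microcode distinction. All
# names and micro-operations are hypothetical illustrations, not
# taken from the 8086/88 or V20/V30 microcode.

# Two microcodes implementing the same one-instruction "instruction
# set": SWAP exchanges the contents of registers A and B.

MICROCODE_1 = {                    # route the values through TMP
    "SWAP": [("MOVE", "TMP", "A"),
             ("MOVE", "A", "B"),
             ("MOVE", "B", "TMP")],
}

MICROCODE_2 = {                    # same effect via XOR exchange
    "SWAP": [("XOR", "A", "B"),
             ("XOR", "B", "A"),
             ("XOR", "A", "B")],
}

def run(microcode, program, regs):
    """Expand each macroinstruction into its micro-operations."""
    for macro in program:
        for op, dst, src in microcode[macro]:
            if op == "MOVE":
                regs[dst] = regs[src]
            elif op == "XOR":
                regs[dst] ^= regs[src]
    return regs

# Identical architectural behavior, different microcode expression:
print(run(MICROCODE_1, ["SWAP"], {"A": 1, "B": 2, "TMP": 0}))
print(run(MICROCODE_2, ["SWAP"], {"A": 1, "B": 2}))
```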

The RESET Microsequence

The RESET microsequence in Rev. 0 is, indeed, almost identical to its counterpart in the Intel microcode. Intel, through Dr. Patterson, points out that there are many alternative ways of writing this microsequence. It is noted, however, that substantially all of the alternative ways suggested by Dr. Patterson use the same registers as do Intel and Rev. 0, the only significant fact here being that both Intel and Rev. 0 use these registers in the same sequential order in making up the six lines of the microsequence. Intel asserts that this is a significant indication of copying.

It should be noted that Mr. Kaneko changed his RESET sequence substantially in writing Rev. 2, which is NEC's final version of the challenged microcode and thus the only one against which a claim of infringement may be directed. Dr. Patterson acknowledged that such changes removed this microsequence from his “substantially similar” list.

The DAA/DAS Microsequences

The Rev. 0 DAA and DAS microsequences were very similar to Intel's. However, Mr. Kaneko made several changes in those microsequences in moving from Rev. 0 to Rev. 2. Intel nonetheless argues that the similarities between Intel's DAA/DAS and those of Mr. Kaneko's Rev. 0 are further evidence of copying, and points out that Dr. Frieder acknowledged that such similarities disclose a “possibility” that Mr. Kaneko was making use of his disassembled code when he wrote Rev. 0, and that Rev. 0 “could have been copied.”

Of course, these microsequences in Rev. 0 could have been copied, and there is a real possibility that Mr. Kaneko made use of the disassembled code when he wrote Rev. 0. As is mentioned above in this memorandum, Mr. Kaneko did not deny having been influenced by his experience in having disassembled Intel's microcode. But I do not see how this substantially helps Intel's case. Dr. Frieder referred to DAA and DAS as being “complex and very difficult to understand.” Let us assume, for purposes of this discussion, that when Mr. Kaneko faced the task of writing these microsequences for Rev. 0 he sought to recall how Intel had handled this difficult problem, or even referred to the character string of his disassembled code to make that determination. Let us assume also for purposes of this discussion that he directly copied what he had learned into Rev. 0.

If he had stopped there and caused Rev. 0 to be the final microcode, a difficult question would arise as to whether what he had taken from the disassembled 8086/88 constituted the technical “idea” for a solution or the “expression” thereof.

However, as is indicated above, Mr. Kaneko changed the subject microsequences substantially in writing Rev. 2, as Dr. Patterson has acknowledged. In See v. Durang, 711 F.2d 141, 142 (9th Cir. 1983), the court affirmed the granting of summary judgment for the defendant, the opinion of the court noting that the plaintiff had sought to obtain “early drafts of defendant's play on the theory that they might reflect copying from plaintiff's play that was disguised or deleted in later drafts. Copying deleted or so disguised as to be unrecognizable is not copying.” See also Eden Toys, Inc. v. Marshall Field & Co., 675 F.2d 498, 501 (2d Cir. 1982) (“a defendant may legitimately avoid infringement by intentionally making sufficient changes in a work which would otherwise be regarded as substantially similar to that of the plaintiff”); Warner Brothers v. American Broadcasting Companies, 654 F.2d 204, 211 (2d Cir. 1981) (same). With respect to the DAA/DAS microsequences in Rev. 2, which is the challenged microcode, there remains no basis for a claim of copying or even of substantial similarity.

The XLAT Microsequence

Intel notes, through Dr. Patterson, that Mr. Kaneko made a mistake in recording the XLAT portion of the disassembled code by writing “AH” instead of “AL,” and that he perpetuated the same mistake in writing Rev. 0. Intel submits that this is a further indication “that Mr. Kaneko made reference to his disassembled source code while he was writing the Rev. 1 microprogram.” For reasons discussed in the preceding subparagraph, such a conclusion, even if correct, is of little significance here.
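The evidentiary weight of the AH-for-AL slip is easier to see from what XLAT actually does. On the 8086, XLAT replaces the byte in AL with the byte found at address BX + AL in a lookup table; AH plays no part in the instruction, which is why perpetuating the same AH-for-AL error from the disassembled listing into Rev. 0 suggested reference to that listing. A minimal Python sketch of the instruction's effect follows; the register and memory model is a simplification for illustration.

```python
# Minimal sketch of the 8086 XLAT instruction's effect:
# AL <- memory[BX + AL]. Note that AL, not AH, is the register
# involved. Simplified register/memory model for illustration.

def xlat(regs, memory):
    regs["AL"] = memory[regs["BX"] + regs["AL"]]
    return regs

table = {0x100 + i: 0x41 + i for i in range(26)}   # table at 0x100
print(xlat({"AL": 2, "BX": 0x100}, table))          # AL becomes 0x43
```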

Treatment Of “Illegal” Instructions

Intel suggests that Mr. McKevitt adopted a method of treating “illegal” instructions that was “strongly disfavored and contrary to industry practice” and that Mr. Kaneko adopted the same method in writing Rev. 0. Intel again suggests that this similarity of choices “is extremely unlikely to have occurred other than as a result of Mr. Kaneko's referring at the time to his disassembled source code.”

Once again, I am willing to assume that when Mr. Kaneko faced the problem of “illegal instructions,” he recalled or even reexamined the character string of the disassembled Intel microcode and concluded that he would follow Mr. McKevitt's lead. But Intel does not point to any expression in the final version of Mr. Kaneko's V-series microcode that copies Mr. McKevitt's treatment of “illegal” instructions.

The Constraints

Hardware, Architecture, and Specifications

In seeking to show that there were many alternate ways in which Mr. Kaneko's various microprograms could have been written, and therefore that the substantial similarities to Intel's microcode that did exist stemmed from copying, Intel declined to take into consideration the constraints that limited Mr. Kaneko's choices. Intel contends that NEC could have created a microprocessor compatible with Intel's 8086/88 by using “different hardware, different architecture, different specifications, and a different microinstruction format.”

However, NEC had a license from Intel to duplicate the 8086/8088 microprocessor hardware to the extent comprehended by the Intel patents. Both Dr. Patterson and Dr. Frieder acknowledged that the use of such hardware limited substantially the choices available to Mr. Kaneko in creating the microcode for the V-series. Having granted to NEC a license to duplicate the hardware of its 8086/88 to the extent comprehended by the Intel patents, and having conceded at trial that NEC had a right to duplicate the hardware of the 8086/88 because it was not otherwise protected by Intel, Intel is in no position to challenge NEC's right to use the aspects of Intel's microcode that are mandated by such hardware.

The Storage Space

Intel also asserts that if NEC had utilized all of the storage area (ROM) available to it on its microprocessor, which would have been double the storage space that was available to Intel, any constraint imposed by size would have been removed and a different and better microcode would have resulted.

However, NEC elected to use part of the ROM space existing on the V-series, in order to accommodate microcode for additional macroinstructions on the same ROM. This also was a legitimate constraint. NEC was not obliged to avoid the similarity that other constraints imposed by creating a larger microcode.

The Clean Room

The Clean Room microcode constitutes compelling evidence that the similarities between the NEC microcode and the Intel microcode resulted from constraints. The Clean Room microcode was governed by the same constraints of hardware, architecture, and specifications as applied to the NEC microcode, and copying clearly was not involved. Mr. McKevitt, who created the 8086 microcode for Intel, readily acknowledged that the microarchitecture of the 8086 microprocessor affected the manner in which he created his microcode, and that he would expect that another independently created microcode for the 8086 would have some similarities to his.

Accordingly, the similarities between the Clean Room microcode and the Intel microcode must be attributable largely to the above-mentioned constraints. But the similarities between the Clean Room microcode and Rev. 2 are at least as great as are the similarities between the latter and the Intel microcode. The strong likelihood follows that these similarities also resulted from the same constraints.

Idea v. Expression

As concluded above, overall, and particularly with respect to the longer microroutines, NEC's microcode is not substantially similar to Intel's; but some of the shorter, simpler microroutines resemble Intel's. None, however, are identical. As to these shorter, simpler microroutines, if their underlying ideas are capable of only a limited range of expression, they “may be protected only against virtually identical copying.” Frybarger v. International Business Machines Corp., 812 F.2d 525, 530 (9th Cir. 1987). See also Worth v. Selchow & Righter Co., 827 F.2d 572 (9th Cir. 1987).

In determining an idea's range of expression, constraints are relevant factors to consider. See Data East USA v. Epyx, Inc. The expression of NEC's microcode was constrained by the use of the microinstruction set and hardware of the 8086/8088.

Accordingly, it is the conclusion of this court that the expression of the ideas underlying the shorter, simpler microroutines (including those identified earlier as substantially similar) may be protected only against virtually identical copying, and that NEC properly used the underlying ideas without virtually identically copying their limited expression.

Conclusion

For reasons hereinabove set forth, judgment will be entered holding that the microcodes that NEC produced for its V20, V30, V40, and V50 microprocessors do not infringe the Intel copyrights for its 8086 and 8088 microcodes.

Notes

1. Did NEC prepare the accused microcode in a “clean room”? What was the relevance of the clean room exercise?

2. The parties in this case stipulated that Intel “has not claimed in this action that [NEC's] use of such macroinstruction set as such [i.e., the 8086 instruction set] violates any rights of [Intel].” Why do you think Intel so stipulated? Would you second guess counsel on this point?

3. Why isn't NEC's mere desire to achieve compatibility with the Intel 8086/8088 chip a base commercial consideration that should be ignored in determining whether it was “necessary” for NEC to duplicate Intel's microcode routines? What would the Apple-Franklin court have said about the NEC-Intel case?

4. NEC had a license from Intel covering the 8086/8088 architecture. Suppose that the architecture of the chip is in the public domain, so that no license is needed to avoid liability for patent or copyright infringement, and that NEC had therefore purchased no license. Would the same result occur on the constraints issue?

5. This is only the second installment of this opinion. Material on reverse-engineering is still to come. This portion of the opinion raises an issue that recurs in other cases (especially reverse-engineering cases) — intermediate copying. This court said several times in the opinion that it did not matter whether Mr. Kaneko's Rev. 0, from which NEC's commercial Rev. 2 descended, infringed Intel's copyright. The court required Intel to point to infringing material in Rev. 2, the final version.

Is that a proper test? What about the doctrine of the fruit of the poisoned tree? Aren't the applicable policies the same? Why not a copy is a copy is a copy?

6. As a point of information, NEC's expert witness in this case was Dr. Gideon Frieder, former dean of The George Washington University's School of Engineering and Applied Science. His specialty is operating systems.



Discussion of Human Factors Analysis,
Functionality, and Standardization

Excerpt from R. Stern, Legal Protection of Screen Displays and
Other User Interfaces for Computers: A Problem in Balancing
Incentives for Creation Against Need for Free Access to the
Utilitarian, 14 Colum.-VLA J. Law & the Arts 283 (1990)

In broad terms, the generally accepted techniques and approaches to screen design that should be considered part of the public domain usually involve the application of ‘human factors analysis’ to screen design. Human factors analysis is the study of human interaction with computers, which may be based on logic or empirical data.1

  1.     Human factors analysis attempts to apply scientific principles to the study of human/computer interactions in order to make it easier for humans to use computers. Often, so-called “user friendliness” is merely the purposeful or inadvertent application of the principles of human factors analysis. User friendliness may also correlate with aesthetic principles. By the same token, aesthetic principles may correlate with principles of human factors analysis. In short, these are often just different names for the same thing.

The literature on human factors analysis includes the following serial publications: Human Factors, Int'l J. Man-Machine Studies, Proc. Human Factors Soc., and SIGCHI Bull. (newsletter of Sp. Int. Group on Computer-Human Interaction). See also Proceedings CHI ‘86: Human Factors in Computing Systems (1986); Human Factors and Interactive Computer Systems (Y. Vassiliou ed. 1984); Proceedings: Human Factors in Computer Systems (1982). Much of this material, insofar as it concerns menus and other screen displays, is summarized in Galitz, supra note 25. Shneiderman, supra note 13, also extensively discusses screen displays.

It is known (and texts state) that material on a menu or similar screen display should be so ordered as to group together functionally similar commands. The menu should vertically order commands by their frequency of use (so that the user may more easily find the most frequently used commands at the top of the menu), and should otherwise facilitate users’ task flow. Titles should be centered at the top of the screen; command fields should be displayed at the bottom of the screen. These ideas should not be proprietary.
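As a concrete illustration of these textbook prescriptions, the sketch below renders a character-mode menu with the title centered at the top, commands grouped by function and ordered by frequency of use, and a command field at the bottom. The command names and groupings are invented for illustration.

```python
# Sketch of a character-mode menu embodying the textbook principles
# described above: centered title at the top, commands grouped by
# function and ordered by frequency of use, command field at the
# bottom. Command names and groupings are invented.

WIDTH = 40
TITLE = "FILE MENU"

GROUPS = [                       # most-used group and commands first
    ("Documents", ["Open", "Save", "Print"]),
    ("Housekeeping", ["Rename", "Delete"]),
]

def draw_menu():
    lines = [TITLE.center(WIDTH)]           # centered title at top
    for heading, commands in GROUPS:
        lines.append("")
        lines.append(f" {heading}")
        lines.extend(f"   {i}. {cmd}" for i, cmd in enumerate(commands, 1))
    lines.append("")
    lines.append("Command: _")              # command field at bottom
    return "\n".join(lines)

print(draw_menu())
```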

Listed below are some specific examples of these generally known techniques of screen design, which at the level of generality used here to describe them should be considered in the public domain:

The foregoing techniques are taught and prescribed in textbooks and other literature. Sometimes, their use has been supported by empirical data from experiments that compare use of two screen displays or other user interfaces, one embodying and one not embodying one of the prescribed techniques. Generally, employing these techniques in designing the user interface of a computer program makes it easier and faster for users to learn how to use the program, causes them to make fewer mistakes, increases their work output, and makes it easier for them to recall how to use the computer program after a period of nonuse.2 Those characteristics are equated in this Article with the terms “functional” and “utilitarian.”

  2.     Shneiderman, supra note 13, at v, 14-15. It is also said that employing the techniques increases user satisfaction with the program by lessening user anxiety and frustration attendant to use of computer programs. Id.; id. at 32.

The copyright law's concepts of function and utility3 draw in the main from the Supreme Court's decision in Baker v. Selden, in which the Court distinguished books about useful arts and articles from the arts and articles themselves. The Court held that the wording of such a book was protected by the copyright in the book, but the arts and articles of use described in the book were not protected by the copyright in the book. Publication dedicated them to the public. Further, if the specific wording of the book was necessary to the use of the art that the book taught, publication of the book dedicated that, too, to the public.4 The Court emphasized that the protection of arts and articles of use was properly the province of patent law, not copyright law. Since Baker, function and utility have often been discussed in copyright law decisions, but the concepts have not been well defined.5

  3.     The terms “functional” and “utilitarian,” which are central to the analysis in this Article, are not defined by the Copyright Act. The Act defines a “useful article” as an article “having an intrinsic utilitarian function” that is more than just portrayal or communication of information, 17 U.S.C. § 101. But the Act leaves “intrinsic utilitarian function” (and the components of that phrase) undefined.

  4.     In that case, Selden's copyrighted book explaining his “peculiar system of book-keeping” contained some ruled blank forms to be used in carrying out accounting methods that Selden taught. Baker's imitation of Selden's forms did not make him liable for copyright infringement, for Selden's forms had to be “considered as necessary incidents to the art, and given therewith to the public.” Id. at 103.

  5.     The function of a computer program is discussed in Apple Computer, Inc. v. Franklin Computer Corp., 714 F.2d 1240 (3d Cir. 1983), cert. dism'd by stip., 464 U.S. 1033 (1984), for purposes of a Baker analysis (whether particular lines of code were necessary for performing the function of the program). But the court's analysis of what was the function of a compiler program, for example, is entirely question-begging. The court defined the function of the compiler at such a highly abstract and general level (in effect, translating some source code into some object code, rather than a specified type of source code into a specified type of object code) that numerous alternatives existed and nothing was necessary to the function so defined. By way of analogy, if in emulation of the Apple court one considered the function of Selden's art to be doing some kind of bookkeeping, never mind what kind, Baker would not be excused from imitating Selden's forms, because Baker could have instead used some other kind of bookkeeping method.

See also Whelan Associates, Inc. v. Jaslow Dental Laboratory, Inc., 797 F.2d 1222, 1238 n.34 (3d Cir. 1986), cert. denied, 479 U.S. 1031 (1987) (idea of computer program was to run dental laboratory “in an efficient way,” not to do so “in a certain way”).

An extensive body of writing on functionality and utility exists in the field of unfair competition and trademark law as applied to the configuration or design of products (so-called “trade dress” law).6 This body of law usually equates the terms “functional” and “utilitarian” to one another and uses both of them to refer to product features that make a product superior in performance or cheaper to manufacture.7 The decisions distinguish between what they term de facto functionality and de jure functionality. Any product configuration may be said to have some functionality, in the sense that any bottle holds liquids, whether its shape is fanciful (as is that of a Haig & Haig “pinch bottle”) or conventional; this attribute is termed de facto functionality, and does not disable a product configuration from receiving trademark-type protection. When a product configuration is the “best or one of a few superior designs” for a particular function the configuration is said to be de jure functional, and the law will not protect the configuration against imitation, because that would hinder competition.8

  6.     See, e.g., New England Butt Co. v. ITC, 756 F.2d 874 (Fed. Cir. 1985); In re Morton-Norwich Prods., Inc., 671 F.2d 1332 (C.C.P.A. 1982).

  7.     See In re Morton-Norwich Prods., Inc., 671 F.2d at 1338-39.

  8.     Id. at 1339-41. Morton-Norwich's key test for de jure functionality, which is whether (or the extent to which) alternative designs or constructions are available for competitors to use to accomplish the de facto function of the product in an efficient, satisfactory manner, is generally accepted in unfair competition and trademark law. See, e.g., Brandir Int'l, Inc. v. Cascade Pac. Lumber Co., 834 F.2d 1142, 1148 (2d Cir. 1987). The test is not without difficulty, however. When function is defined broadly many alternatives exist, and when function is defined narrowly few or no alternatives exist.

Patent law distinguishes between ornamental aspects of a product and functional or utilitarian aspects. The former may be protected by a design patent, while the latter may be protected only by an ordinary, or “utility,” patent. The patent law concept of functionality, as applied to design patents, appears to parallel the trademark law concept.

For the purposes of this Article, as already indicated, a feature or aspect of a computer program screen display or user interface is functional or utilitarian if the feature aids users of the computer program to perform tasks faster, more easily, with fewer errors, more cheaply, or otherwise more efficiently or effectively. This concept corresponds to de jure functionality in unfair competition and trademark law, which appears to be essentially the same as functionality in design patent law. Under this concept, a screen display technique prescribed in a textbook on human factors analysis would be functional. Other commentators have used the terms “functional” and “utilitarian” differently in the context of computer program-related copyrights. Thus, Professor Karjala's concept of functionality includes what trademark law calls de facto functionality.9 The point will be addressed again subsequently.

  9.     Letter from Prof. Dennis S. Karjala to Richard H. Stern (Sept. 9, 1989) (regarding the remarks of Prof. Dennis S. Karjala at the SOFTIC Symposium, Nov. 7, 1989, in Tokyo, Japan). Professor Karjala uses as an example the keystrokes that a user interface would use to invoke the “Print” function. He and I would both agree that the use of the keystroke <P> to designate “Print” in a user interface is functional. My reason is that <P> has mnemonic value suggesting the “P” of “Print,” and in addition many users are already accustomed to using <P> for “Print.” Professor Karjala would say, but I would not, that the keystroke pair <&$> is also functional for designating “Print” if that is what the user interface of a commercially available program uses. His reason is that <&$> causes a program to operate the printer in the given circumstances. That appears to be de facto functionality.

Finally, the relation of convention and de facto and de jure standards to functionality must be considered. Thus far, we have considered screen display or other user interface expedients that were functional (i.e., facilitated cheaper, faster, more efficient performance of tasks) for intrinsic reasons. These reasons related to human physiology (e.g., relative sensitivity of eye to different wavelengths), equipment limitations (e.g., monitor flicker), or culturally established mindsets (e.g., the < → > [right arrow keystroke] means “go to right”, red means danger). A user interface expedient may be functional, also, in the same efficiency-related sense, because although originally arbitrary it has acquired an associated or “secondary meaning” that it efficiently communicates.

1. Convention

A user interface technique may be functional because it embodies a convention known to users. Conventions are functional and utilitarian, for they facilitate and speed communication of ideas. A conventional gesture, for example, may substitute for many words and more emphatically convey an idea. Arbitrary conventions occur in the traditional subject matter of copyright, such as plays, films, and comic strips, and are recognized as unprotected by copyright. For example, the use of a light-bulb in a balloon over a comic strip character's head to mean the dawning of an idea and the use of stars to mean a sensation of pain are economical means for communicating ideas and they are in the public domain. The same could be said of the use of “#$%&*” to indicate deleted expletives.

Conventional keystrokes illustrate the same point for user interfaces. Use of <F1> to provide a “Help” menu or chart is a widespread convention. Because <F1> is so commonly used for this purpose, it is easy for users to remember and does not burden their short-term memory. Use of <Return> to indicate the user's selection of an item located at the point in a menu where the cursor then rests is another common convention, and therefore easy for users to remember. Treating either of these keystroke assignments as proprietary would deprive computer programmers of a useful and established expedient. Preempting conventional features of screen displays and other user interfaces would impose additional creation costs on designers and impose additional learning costs on users.
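The conventions just described amount to a key-binding table that any designer may adopt. In the sketch below, the handler names and messages are hypothetical; the point is that binding <F1> to help and <Return> to selection follows established convention rather than arbitrary choice, and so costs users nothing to learn.

```python
# Sketch of a key-binding table honoring the conventions discussed
# above: <F1> invokes help, <Return> selects the item under the
# cursor. Handler names and behavior are hypothetical.

def show_help():
    print("Displaying help screen...")

def select_current_item():
    print("Selecting the menu item under the cursor...")

KEY_BINDINGS = {
    "F1": show_help,                # conventional: F1 = help
    "RETURN": select_current_item,  # conventional: Return = select
}

def dispatch(key):
    handler = KEY_BINDINGS.get(key)
    if handler is None:
        print(f"No binding for {key}")
    else:
        handler()

dispatch("F1")
dispatch("RETURN")
```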

2. De Jure Standards

De jure standards are formally, officially or institutionally agreed-upon conventions, as distinguished from those conventions that evolved by informal, unofficial means. Some professional organizations, such as the IEEE, devote a major part of their energy to formulation of and agreement on standards. In general, standardization benefits individual users, because it allows them to transfer their knowledge of one system to another system. Standardization also benefits corporate users, because it increases worker productivity, and decreases training time and expense, including expenses caused by errors. Finally, standardization benefits screen designers because it resolves certain design issues and, in effect, interdicts reinvention of the wheel.

3. De Facto Standards

The term de facto standard refers to a widely used convention in an industry, where no institution has prescribed use of the standard. Thus, <F1> is a de facto standard keystroke for invoking a “help” menu; </> is a de facto standard keystroke for invoking the command mode in spreadsheet programs.

More controversially, a very large portion of a user interface may be said to have become a de facto standard for a particular kind of computer program. The entire command sets of the Crosstalk communication program and the 1-2-3 spreadsheet were each said to have become de facto industry standards. In both instances, however, their unauthorized use in a competitive computer program was held to be copyright infringement.

For all practical purposes, to call something a de facto standard is merely to call it an accepted convention in very emphatic language, or to emphasize that the convention has not been officially blessed by some institution to give it de jure status.

What is good design practice, convention, de facto standard, or de jure standard is not static; it is a moving target. There is every reason to expect a progression by which the teachings of human factors analysis about screen displays and other user interfaces will move from good design practice status to convention status and eventually to the status of de jure standard, blessed by the IEEE or another appropriate standard-making institution, or even required by Department of Defense procurement regulations or other provisions of law. By the same token, some present standards are bound to be superseded eventually.

Different techniques or expedients are functional or utilitarian to different extents. Hence, the potential adverse public effects of preempting them by copyright also differ. To complicate the matter further, even the same technique may be functional to a different degree depending on the factual context. For example, highlighting the first one or two letters of a command or parameter, and then using the highlighted alphanumerics to represent the keystrokes for invoking those commands, is sometimes so highly utilitarian that it is necessary or indispensable. At other times, highlighting is just use of a convenient, albeit inessential, convention.
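The highlighting technique just mentioned can be sketched as a simple derivation of keystrokes from command names: the first letter of each command is marked (brackets below stand in for on-screen highlighting) and doubles as the key that invokes the command. The command names are invented for illustration.

```python
# Sketch of the mnemonic-highlighting technique described above:
# the highlighted first letter of each command (shown here with
# brackets) also serves as the keystroke that invokes it.
# Command names are invented.

COMMANDS = ["Copy", "Move", "Erase", "Rename"]

def menu_line(command):
    """Render 'Copy' as '[C]opy', marking the mnemonic keystroke."""
    return f"[{command[0]}]{command[1:]}"

KEYSTROKES = {command[0].upper(): command for command in COMMANDS}

print("  ".join(menu_line(c) for c in COMMANDS))  # [C]opy  [M]ove ...
print(KEYSTROKES.get("M"))                        # -> Move
```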

Accordingly, one cannot make excessively sweeping and universal pronouncements on the dangers vel non of preemption. Nevertheless, it seems safe to generalize that a decision-maker should always be wary of protecting what may be an emerging convention when the proponent of the protection simultaneously contends that the feature is expressive and that it is helpful or beneficial to users. At the very least, one should regard the two characteristics as being inversely related — the more functionality and utility, pro tanto the less expressiveness.

The foregoing considerations suggest that convention and standardization should be relevant to a copyright infringement analysis in a user interface controversy. Two judicial opinions have expressly addressed this point, reaching contrary results, and there are overtones on the point in several other decisions. In Synercom Technology, Inc. v. University Computing Co. [462 F. Supp. 1003 (N.D. Tex. 1978)], the court analogized a set of data input formats for a computer program to the “H-pattern” gearshift interface for automobiles. The court pointed out that such standards, at least initially de facto, become established arbitrarily; they were not necessarily intrinsically superior to alternative means (e.g., the QWERTY typewriter keyboard user interface). Once in place, the standards resolve issues, promote efficiency, and constrain the marketplace. They then become, for copyright purposes, a part of idea rather than expression, and a competitor does not infringe by using them.

In Lotus Development Corp. v. Paperback Software International [740 F. Supp. 37 (D. Mass. 1990)], the defendant sought to rely on Synercom as a precedent supporting its adoption of the spreadsheet industry de facto standard 1-2-3 command structure. The court rejected both Synercom as a valid copyright precedent and the idea that de facto standardization could be a defense to a charge of copyright infringement. The Paperback court considered convention and standard to be essentially immaterial considerations. It also questioned the value of standardization, on the ground that a standard may turn out to be the equivalent of the QWERTY keyboard, that is, suboptimal. In its view, only de jure standards were legally material in determining whether use of copyrighted material was excusable. The Paperback opinion will be discussed in detail below, in the context of other recent screen-display and user-interface decisions; it would be premature, here, to go into detailed analysis of the court's opinion. For the present purpose, it is sufficient to observe that to deprecate standardization as a value, as the Paperback opinion does, is to condone or even prescribe making it harder and more exasperating for the public to use software, and therefore to slow the growth of the software market.12 That holds back, rather than promotes, software progress.

  12.     An official of Microsoft Corp., probably the leading software publisher in the United States, asserted that the user interface is an example of where the rule should be standardization, rather than differentiation. “That's the wrong place to differentiate your product. We in the computer business benefit from the more users who have ready access to the technology. The more confusing we make it for users, the slower the market is going to grow.” PC Week, Aug. 25, 1987 (quoting Mark Ursino).



Policy Issues Raised by Protection of User Interfaces
U.S. Cong./Off. Technol. Assessment, Finding a Balance:
Computer Software, Intellectual Property, and the
Challenge of Technological Change ch. 4 (1992)
http://www.wws.princeton.edu/cgi-bin/byteserv.prl/~ota/disk1/1992/9215/921506.PDF

The economic effects of protecting interfaces are difficult to evaluate, requiring a determination of the appropriate level of incentives and the role of standards and network externalities. An evaluation of the economic effects of intellectual property protection may also be complicated by the fact that there are different types of interfaces. The value of a standard, and the balance between the cost of designing the interface and cost of its implementation, may both depend on the type of interface.

Incentives

One policy position is that intellectual property protection is required in order to provide the proper incentives for the development of software. It is argued that protection of the program code alone is not sufficient to provide this incentive. Because there are different ways of writing a program with the same interface, it may be possible to reimplement the same interface without a finding of infringement. If the cost of reimplementation were small when compared to the original developer's investment in designing the interface, it would be relatively easy to appropriate this investment. Without more direct intellectual property protection for the external design, it is argued, there would be less incentive to develop new interfaces.

An important factor in evaluating whether external designs should be protected is therefore the relative cost of design and implementation. Supporters of intellectual property protection for external designs argue that the cost of implementation is becoming less significant. In Lotus the court said:

I credit the testimony of expert witnesses that the bulk of the creative work is in the conceptualization of a computer program and its user interface, rather than in its encoding.

Similar considerations are said to apply to other types of interfaces; during an intellectual property panel at the 1990 Personal Computing Forum, one participant said “the hard work in doing object-oriented technology is in the interface design. The implementation of an object is trite.” The relative cost of design and implementation is also an important factor in the decompilation debate discussed later in this chapter. It has been argued that decompilation can make it significantly less expensive for a competitor to reimplement an existing program.

The alternative view is that there is sufficient incentive to engage in the design of interfaces even without intellectual property protection. Those who argue for this position claim that reimplementation may be time-consuming and expensive, providing the original developer with significant lead time. Other factors may also provide a significant advantage to the original designer of the interface. Long-range planning of enhancements may favor the interface originator, for example.

Network Externalities

There is a question as to whether the effect of intellectual property laws on standardization should affect an evaluation of the appropriate level of protection for interfaces. Standards benefit users in a number of ways. For example, a greater variety of application programs will be developed if there is a standard operating system. Developers will be able to sell to a larger market and more easily recover their development costs. Another example of an advantage of standards is that consistency among user interfaces makes it easier for users to learn to use a new program. The benefits to users that result from the wider use of an interface are known as “network externalities.” Moreover, users may benefit from competition among suppliers of a standard product. For example, suppliers of compilers for standard languages compete on the basis of the cost of the compiler and the efficiency of the machine language code generated.

De facto standards evolve through the actions of the market. If there is a dominant firm the interface that it has developed is more likely to become the standard. Alternatively, a de facto standard can develop because of a “bandwagon” effect. If consumers are faced with a choice between different interfaces, network externalities make the more widely used product more attractive. Consumers value the network externalities, not just the intrinsic value of the interface.

Standards may also be negotiated using standards committees. Firms engage in voluntary standards-setting when they determine that they are better off with a part of a larger market than if they were to continue trying to establish their interface as a de facto standard. Consumers may be less willing to buy a proprietary product. For example, it is not clear whether a computer language available from a single vendor would be widely used. A developer might be unwilling to rely on a single supplier.

One view is that intellectual property protection may harm users by affecting standardization processes. It is argued that firms may not have the correct incentives to engage in voluntary standards-setting because intellectual property protection can increase a firm's vested interest in seeing the interface it has developed chosen as a standard, slowing the standardization process. This could harm users, until a standard is negotiated or one interface prevails in the marketplace.

Users could also be harmed if new programs are not “backwards compatible” and require users to learn a new interface to take advantage of new features or better performance. In addition, it has been argued that network externality effects can complicate the balancing of incentives for software development by resulting in “extra” revenues for firms that succeed in establishing their products as a de facto standard and making it more difficult for other firms to enter the market.

The other view is that the question of standards should be kept separate from the basic issue of the proper incentives for software development. Furthermore, it is argued that voluntary standards efforts are sufficient, and that there is a trend in the computer industry toward using more formal standardization and licensing processes. Consortia have formed in a number of areas, such as user-interface design and operating systems. There are a variety of voluntary standards committees that are developing standards for data-communications protocols, operating-system interfaces, and principles for user interface design.



