Subject Matter Eligible?: No
10. A computer-implemented student evaluation method, comprising the steps of: (a) administering a computer-implemented assessment to a student, wherein the computer-implemented assessment comprises a plurality of items;
(b) using said computer to select a first learning target from a learning map;
(c) using a computer to record or access the student’s response to items in the computer-implemented assessment in a storage unit, wherein the items relate to said first learning target, precursors, or postcursors of said first learning target;
(d) using said computer to determine, for the first learning target, a set of values, wherein the values are based on the student’s responses to the items and predetermined response effect values;
(e) using said computer to calculate a probability value that represents the probability that the student knows the first learning target, wherein the determined probability value is a function of, at the least, said set of determined values; and
(f) using said computer to identify precursors and postcursors of the first learning target and to modify said learning map to store postcursor and precursor relationship data determined by said computer for said first learning target, further comprising the step of, for each postcursor, determining the probability that the student knows the postcursor, further comprising the step of determining whether the student’s demonstrated knowledge state of the postcursors indicates that the student’s actual probability of knowing the learning target is greater than the determined probability value, and further comprising the step of increasing the probability value if the student’s demonstrated knowledge state of the postcursors indicates that the student’s actual probability of knowing the learning target is greater than the determined probability value.
The Federal Circuit more recently acknowledged that in cases involving software innovations, the inquiry as to whether the claims are directed to an abstract idea “often turns on whether the claims focus on ‘the specific asserted improvement in computer capabilities … or, instead, on a process that qualifies as an “abstract idea” for which computers are invoked merely as a tool.’” Finjan, Inc. v. Blue Coat Sys., Inc., 879 F.3d 1299, 1303 (Fed. Cir. 2018) (quoting Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36 (Fed. Cir. 2016)). In Finjan, the court held that claims directed to a behavior-based virus scan constituted an improvement in computer functionality over the “traditional, ‘code-matching’ virus scans.” Id. at 1304. Instead of looking for the presence of known viruses, “behavior-based” scans analyze a downloadable’s code and determine whether it performs potentially dangerous or unwanted operations, thus enabling more flexible and nuanced virus filtering. Id. The court determined that the claimed method employs a new kind of file, allows access to be tailored for different users, and allows the system to accumulate and use newly available, behavior-based information about potential threats. Id. at 1305. Based on these findings, the court determined that the claims are “directed to a non-abstract improvement in computer functionality, rather than the abstract idea of computer security writ large.” Id. “Here, the claims recite more than a mere result. Instead, they recite specific steps – generating a security profile that identifies suspicious code and linking it to a downloadable – that accomplish the desired result.” Id.
Appellant points to the modification of the learning map by the computer as the key aspect of the claimed improvement to a computer-implemented student evaluation method. Tr. 7:1-13. As explained above, this modification amounts to updating hypothesized information on the learning map based on actual test data. The method of claim 10 is unlike the claimed invention in McRO, in which the animation software used new rules to automatically set a keyframe at the correct point to depict more realistic speech as compared to the prior art, in which animators had to subjectively identify a problematic sequence and fix it by adding an appropriate keyframe. McRO, 837 F.3d at 1307. Claim 10 does not recite rules that improve the student evaluation method by, for example, automating the creation of the learning map. Cf. McRO, 837 F.3d at 1314 (“It is the incorporation of the claimed rules, not the use of the computer, that ‘improved [the] existing technological process’ by allowing the automation of further tasks”) (quoting Alice, 134 S. Ct. at 2358). Rather, claim 10 is directed to collecting information and analyzing this information by steps people go through in their minds and by mathematical algorithms without more. See FairWarning, 839 F.3d at 1094-95. Specifically, steps (a), (b), (c), and (d) amount to collection of data. These steps of administering an assessment, selecting a learning target, looking up responses to items related to the learning target, and looking up a set of values based on the student’s responses are within the “realm of abstract ideas” of a computer collecting data from external and internal sources. Step (e), which is directed to calculating a probability value, amounts to a mathematical algorithm – another category of abstract ideas. Finally, step (f) is directed to the abstract idea of using actual data as feedback to modify a hypothesis.
In the field of student evaluation methods, the prior art method hypothesized relationships between learning targets, mapped these relationships, and assigned probabilities to these relationships. The claimed improvement to this method is to simply use a feedback loop during implementation of the student evaluation method to refine and update the hypothesis using actual data.
Such a step without more is abstract as an ancillary part of such collection and analysis. Id. Similar to the claims in FairWarning, it is the incorporation of a computer to modify the learning map based on actual responses received from the student that purportedly “‘improve[s] [the] existing technological process’ by allowing the automation of further tasks.” FairWarning, 839 F.3d at 1095 (quoting Alice, 134 S. Ct. at 2358). In other words, we agree with the Examiner that claim 10 is directed to implementing an old practice of student evaluation testing methods based on dependency relationships between learning targets in a computer environment, using the same hypotheses that humans in student evaluation testing have used prior to Appellant’s invention.
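The feedback step discussed above can be sketched as a simple update of a hypothesized inference value between two learning-map nodes. This is a hedged illustration; the running-average weighting rule below is an assumed stand-in, not the claimed method.

```python
# Illustrative feedback update of a hypothesized inference value between
# two learning-map nodes; the weighting rule is an assumed simplification.

def refine_inference(hypothesized, observed_pairs, weight=0.2):
    """Blend an expert's hypothesized probability toward observed test data.

    observed_pairs: (knows_precursor, knows_target) booleans gathered from
    actual student evaluations.
    """
    p = hypothesized
    for knows_pre, knows_target in observed_pairs:
        if knows_pre:  # only precursor-known cases inform this inference
            actual = 1.0 if knows_target else 0.0
            p += weight * (actual - p)  # move toward the observed outcome
    return p

# The expert hypothesized a 0.9 probability that mastering the precursor
# implies mastering the target; the observed data include two misses, so
# the refined value drops toward the actual rate.
p = refine_inference(0.9, [(True, True), (True, False), (True, False), (False, True)])
print(round(p, 3))  # → 0.589
```

Nothing in this sketch depends on a computer in any essential way; each update is a single arithmetic step that could be carried out by hand, which is the point of the analysis above.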
Just like in Flook and Bilski, limiting the abstract idea to the field of an adaptive learning engine does not make the concept patent eligible. Parker v. Flook, 437 U.S. 584 (1978). The claims attempt to patent the use of the abstract idea of learning target dependency relationships in an adaptive learning engine environment. See Tr. 7:1-9. In other words, the claims are directed to using a learning map devised by subject matter experts to assess a student and then revising the learning map based on actual responses received from the student. The revision could be simply updating an initial hypothesized inference value between two nodes on the learning map based on actual use of the learning map in a student evaluation. Id. at 10:11-12:5.
Unlike in Finjan or Enfish, appealed claim 10 does not recite an improvement in computer capabilities. Finjan, 879 F.3d at 1303; Enfish, 822 F.3d at 1335-36. Rather, the improvement is to the underlying assumptions in the learning map (an abstract idea), and the computer is invoked merely as a tool to test and update these assumptions. The use of the computer to modify the learning map using particularly claimed rules did not improve an existing technological process; instead, these rules merely use the computer as a tool to automate conventional activity. See McRO, 837 F.3d at 1314. For these reasons, we agree with the Examiner that claim 10 is directed to an abstract idea.
… [I]f the claim language “provides only a result-oriented solution, with insufficient detail for how a computer accomplishes it,” then the claims do not contain an “inventive concept” under Alice step 2. Intellectual Ventures I LLC v. Capital One Fin. Corp., 850 F.3d 1332, 1342 (Fed. Cir. 2017); see also Elec. Power Grp., 830 F.3d at 1354 (explaining that claims are directed to an abstract idea where they do not recite “any particular assertedly inventive technology for performing [conventional] functions”).
Claim 10 does not add significantly more to the abstract idea. Appellant contends that, as in DDR Holdings, the appealed claims here address “a technological challenge (dynamic student response evaluation and learning map modification) that is particular to student evaluation software.” Appeal Br. 16-17. Appellant asserts that “the claimed solution is necessarily rooted in computer technology in order to overcome a problem specifically arising in the realm of student evaluation software.” Id. at 17. We disagree with Appellant’s characterization of the claimed invention. Appellant’s claim is directed to a method of testing and refining assumptions made by a subject matter expert in a learning map. Spec. ¶ 7 (“What is desired, therefore, is a system and method for expressing hypothesized learning target dependencies and for assessing whether the hypothesized learning target dependencies are accurate.”). This “problem” of testing hypothesized learning target dependencies and probabilities for accuracy is not rooted in computer technology; rather, it is a problem faced by educators generally in the field of education. Spec. ¶ 3. Unlike DDR Holdings, claim 10 does not involve a technological solution. See DDR Holdings, 773 F.3d at 1259. Rather, the solution to the problem identified by Appellant is to simply examine the actual student responses and test data to verify the accuracy of the hypothesized relationships and probability values. The computer is simply the tool used to perform the testing and modification.
The computer used is a generic processor performing conventional functions of data gathering, comparing, analyzing, and updating. See Spec. ¶¶ 145-151; Fig. 15 (describing a conventional example computer system 1501, including a processor 1504 connected to a bus 1502, a memory 1506, a secondary memory 1508, and a communications interface 1524). Thus, claim 10 does not recite “any particular assertedly inventive technology for performing [conventional] functions.” Elec. Power Grp., 830 F.3d at 1354. Appellant contends that similar to the claims held to be patent eligible in Diamond v. Diehr, 450 U.S. 175 (1981), the method of claim 10 improves pre-existing technology. Appeal Br. 22. Appellant states that the recited steps “allow dynamic modification of learning maps and storage of postcursor and precursor relationship data, as well as modification of probability values to improve the student evaluation software both for the student being evaluated as well as other students that will be evaluated using the software thereafter.” Id. at 23. Claim 10 recites modifying a hypothesized learning map (e.g., a hypothesized inference value) through analysis of actual gathered data representing an actual inference value, which is a fundamental concept of a system using feedback to refine an initial value. This is the basis for the scientific method. Thus, claim 10 does not present a new technical solution, and we perceive no “inventive concept” that transforms the abstract idea of collecting, analyzing, and updating data into a patent-eligible application of that abstract idea. Further, we are not convinced that the steps transform the abstract concept into a patentable invention simply because the Examiner determined that the last three steps of claim 10 were not practiced in the prior art. Notice of Allowance 2 (June 13, 2014). “Eligibility and novelty are separate inquiries.” Two-Way Media Ltd. v. Comcast Cable Commc’ns, LLC, 874 F.3d 1329, 1339–40 (Fed. Cir.
2017); see also Affinity Labs of Texas, LLC v. DIRECTV, LLC, 838 F.3d 1253, 1263 (Fed. Cir. 2016) (holding that “[e]ven assuming” that a particular claimed feature was novel does not “avoid the problem of abstractness.”). Even if claim 10 recites an unconventional ordered combination of steps for testing the accuracy of the hypothesized probability value of the learning target, the fact that those steps have not previously been employed in the art is not sufficient, standing alone, to confer patent eligibility upon claim 10 because the claimed steps improve the abstract idea of using adaptive learning to test the accuracy of a hypothesized learning map, and do not improve the computer’s performance. See Versata Develop. Grp., Inc. v. SAP Am., Inc., 793 F.3d 1306, 1335 (Fed. Cir. 2015) (claims improved abstract idea not a computer’s performance).