The Common Criteria for Information Technology Security Evaluation (referred to as Common Criteria or CC) is an international standard (ISO/IEC 15408) for computer security certification. It is currently in version 3.1 revision 5.
Common Criteria for Information Technology Security Evaluation, version 3.1 Part 1 (called CC 3.1 or CC) defines the Security Target (ST) as an "implementation-dependent statement of security needs for a specific identified Target of Evaluation (TOE)". In other words, the ST defines the boundary and specifies the details of the TOE.
The process of obtaining a Common Criteria certification allows a vendor to restrict the analysis to certain security features and to make certain assumptions about the operating environment and the strength of threats faced by the product in that environment. Additionally, the CC recognizes a need to limit the scope of evaluation in order to provide cost-effective and useful security certifications.
Where an ST refers to one or more PPs, it must fulfill the generic security requirements given in each of those PPs, and may define further requirements. Common Criteria is a framework in which computer system users can specify their security functional and assurance requirements (SFRs and SARs, respectively) in a Security Target (ST); these requirements may be taken from Protection Profiles (PPs). Vendors can then implement or make claims about the security attributes of their products.
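As a rough illustration of that conformance rule, the sketch below checks that an ST's stated requirements cover every requirement of each claimed PP while still allowing extra requirements of its own. The helper function, PP contents and requirement identifiers are hypothetical examples, not drawn from the CC itself.

```python
# Minimal sketch, not from the CC: an ST claiming one or more PPs must include
# every requirement from each claimed PP, and may add further requirements.
def meets_claimed_pps(st_requirements: set[str], claimed_pps: dict[str, set[str]]) -> bool:
    """Return True if the ST's requirements cover every claimed PP."""
    return all(pp_reqs <= st_requirements for pp_reqs in claimed_pps.values())

# Hypothetical identifiers for illustration only.
st_reqs = {"FIA_UAU.2", "FAU_GEN.1", "FTP_ITC.1"}        # requirements stated in the ST
claimed = {"Example Firewall PP": {"FIA_UAU.2", "FAU_GEN.1"}}

print(meets_claimed_pps(st_reqs, claimed))  # True: the ST covers the PP and adds FTP_ITC.1
```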
Evaluated products are examined to a level of detail specified by the assurance level or PP. Evaluation activities are therefore performed only to a certain depth and with a certain use of time and resources, and offer reasonable assurance for the intended environment. In the Microsoft Windows evaluations, for example, the assumptions include A.PEER.
Common Criteria maintains a list of certified products, including operating systems, access control systems, databases, and key management systems. Common Criteria evaluations are performed on computer security products and systems. The evaluation process also tries to establish the level of confidence that may be placed in the product's security features through quality assurance processes. So far, most PPs and most evaluated STs/certified products have been for IT components (e.g., firewalls, operating systems, smart cards).
There is some concern that the Protection Profile oriented approach may have a negative impact on mutual recognition. In September 2012, the Common Criteria published a Vision Statement implementing to a large extent Chris Salter's thoughts from the previous year; the statement set out several key elements of the Vision.

ISO/IEC 17025, General requirements for the competence of testing and calibration laboratories, is the main standard used by testing and calibration laboratories.
The SARs are typically given as a number 1 through 7 called the Evaluation Assurance Level (EAL), indicating the depth and rigor of the security evaluation, usually in the form of supporting documentation and testing, that the product meets the SFRs. An ST contains some (but not very detailed) implementation-specific information that demonstrates how the product addresses the security requirements. It may refer to one or more Protection Profiles (PPs).
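For orientation, the sketch below lists the seven standard EAL titles and renders a claim string such as "EAL4+". The helper function and its parameters are illustrative, not part of any evaluation scheme.

```python
# Standard EAL titles; the rendering helper is a hypothetical illustration.
EAL_TITLES = {
    1: "Functionally Tested",
    2: "Structurally Tested",
    3: "Methodically Tested and Checked",
    4: "Methodically Designed, Tested and Reviewed",
    5: "Semiformally Designed and Tested",
    6: "Semiformally Verified Design and Tested",
    7: "Formally Verified Design and Tested",
}

def describe_assurance(level: int, augmented: bool = False) -> str:
    """Render an assurance claim, e.g. 'EAL4+ (Methodically Designed, Tested and Reviewed)'."""
    if level not in EAL_TITLES:
        raise ValueError("EAL must be between 1 and 7")
    plus = "+" if augmented else ""   # '+' marks augmentation with additional SARs
    return f"EAL{level}{plus} ({EAL_TITLES[level]})"

print(describe_assurance(4, augmented=True))
```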
The CC's generic approach has been a source of debate among those used to the more prescriptive approach of other earlier standards such as TCSEC and FIPS 140-2. Common Criteria certification cannot guarantee security, but it can ensure that claims about the security attributes of the evaluated product were independently verified. In other words, products evaluated against a Common Criteria standard exhibit a clear chain of evidence that the process of specification, implementation, and evaluation has been conducted in a rigorous and standard manner.
A transition period for the move away from assurance levels has not been fully determined. On July 2, 2014, a new CCRA was ratified per the goals outlined within the 2012 vision statement, bringing major changes to the Arrangement. Common Criteria is very generic; it does not directly provide a list of product security requirements or features for specific (classes of) products. This follows the approach taken by ITSEC.
There are many commonalities with the ISO 9000 standard, but ISO/IEC 17025 is more specific in requirements for competence, applies directly to those organizations that produce testing and calibration results, and is based on more technical principles. Laboratories use ISO/IEC 17025 to implement a quality system aimed at improving their ability to consistently produce valid results. Material in the standard also forms the basis for accreditation from an accreditation body. There have been three releases: in 1999, 2005 and 2017.
In a product evaluation process according to the CC, the ST document is provided by the vendor of the product. An ST defines the security functional and assurance requirements for the given information system product, which is called the Target of Evaluation (TOE). An ST is a complete and rigorous description of a security problem in terms of TOE description, threats, assumptions, security objectives, security functional requirements (SFRs), security assurance requirements (SARs), and rationales.
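The structure described above can be pictured as a simple record. The sketch below is a hypothetical model of an ST's top-level contents; the field names and requirement identifiers are illustrative, not anything mandated by CC Part 1.

```python
from dataclasses import dataclass, field

# Hypothetical model of the ST contents listed above; field names are illustrative.
@dataclass
class SecurityTarget:
    toe_description: str                                   # what the TOE is and where its boundary lies
    threats: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)   # e.g. "A.PEER"
    security_objectives: list[str] = field(default_factory=list)
    sfrs: list[str] = field(default_factory=list)          # security functional requirements
    sars: list[str] = field(default_factory=list)          # security assurance requirements
    rationale: str = ""                                    # why the requirements counter the threats

example_st = SecurityTarget(
    toe_description="Example network appliance (software and management console only)",
    threats=["T.UNAUTHORIZED_ACCESS"],
    assumptions=["A.PHYSICAL"],
    security_objectives=["O.AUTHENTICATION"],
    sfrs=["FIA_UAU.2"],
    sars=["ADV_FSP.2", "ALC_FLR.1"],
    rationale="Each objective traces back to at least one threat or assumption.",
)
print(len(example_st.sfrs), "SFRs,", len(example_st.sars), "SARs")
```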
Common Criteria certification is sometimes specified for IT procurement. Other standards covering, e.g., interoperation, system management, and user training supplement the CC and other product standards. Examples include ISO/IEC 27002 and the German IT baseline protection. Details of cryptographic implementation within the TOE are outside the scope of the CC. Instead, national standards, like FIPS 140-2, give the specifications for cryptographic modules.
In most countries, ISO/IEC 17025 is the standard for which most labs must hold accreditation in order to be deemed technically competent. In many cases, suppliers and regulatory authorities will not accept test or calibration results from a lab that is not accredited. Originally known as ISO/IEC Guide 25, ISO/IEC 17025 was initially issued by ISO/IEC in 1999.
Compliance with ISO/IEC 17025 is typically demonstrated to a national approval authority. Characteristics of these organizations were examined and presented at ICCC 10. As well as the Common Criteria standard, there is also a sub-treaty level Common Criteria MRA (Mutual Recognition Arrangement), whereby each party thereto recognizes evaluations against the Common Criteria standard done by other parties.
The most significant changes between the 1999 and 2005 releases were a greater emphasis on the responsibilities of senior management, explicit requirements for continual improvement of the management system itself, and communication with the customer. The 2005 release also aligned more closely with the 2000 version of ISO 9001 with regard to implementing continuous improvement. The 2005 version of the standard comprises four elements, while the 2017 version comprises eight.
In August 2007, Government Computing News (GCN) columnist William Jackson critically examined the Common Criteria methodology and its US implementation by the Common Criteria Evaluation and Validation Scheme (CCEVS). In the column, executives from the security industry, researchers, and representatives from the National Information Assurance Partnership (NIAP) were interviewed, and several objections were outlined in the article. In a 2006 research paper, computer specialist David A. Wheeler suggested that the Common Criteria process discriminates against free and open-source software (FOSS)-centric organizations and development models.
In common with other accreditation standards of the ISO 17000 series (and unlike most ISO standards for management systems), assessment of the laboratory is normally carried out by the national organization responsible for accreditation. Laboratories are therefore "accredited" under ISO/IEC 17025, rather than "certified" or "registered" by a third-party service as is the case with the ISO 9000 quality standard.
The first laboratory accreditation bodies to be established were the National Association of Testing Authorities (NATA) in Australia (1947) and TELARC in New Zealand (1973). Most other bodies are based on the NATA/TELARC model; these include UKAS in the UK, FINAS in Finland and DANAK in Denmark, to name a few.
The European countries within the SOGIS-MRA typically recognize higher EALs as well. Evaluations at EAL5 and above tend to involve the security requirements of the host nation's government. In September 2012, a majority of members of the CCRA produced a vision statement whereby mutual recognition of CC evaluated products will be lowered to EAL 2 (including augmentation with flaw remediation). Further, this vision indicates a move away from assurance levels altogether, and evaluations will be confined to conformance with Protection Profiles that have no stated assurance level. This will be achieved through technical working groups developing worldwide PPs.
Some national systems (e.g. UKAS M10 in the UK) were the forerunners of ISO/IEC 17025:1999 but could also be exceedingly prescriptive. ISO/IEC 17025 allows laboratories to carry out procedures in their own ways, but requires the laboratory to justify using a particular method. In common with other ISO quality standards, ISO/IEC 17025 requires continual improvement. Additionally, the laboratory is expected to keep abreast of scientific and technological advances in relevant areas.
The MRA was originally signed in 1998 by Canada, France, Germany, the United Kingdom and the United States; Australia and New Zealand joined in 1999, followed by Finland, Greece, Israel, Italy, the Netherlands, Norway and Spain in 2000. The Arrangement has since been renamed the Common Criteria Recognition Arrangement (CCRA) and membership continues to expand. Within the CCRA, only evaluations up to EAL 2 are mutually recognized (including augmentation with flaw remediation).
Political scientist Jan Kallberg raised concerns over the lack of control over the actual production of the products once they are certified, the absence of a permanently staffed organizational body that monitors compliance, and the idea that the trust in Common Criteria IT-security certifications will be maintained across geopolitical boundaries. In 2017, the ROCA vulnerability was found in a list of Common Criteria certified smart card products. The vulnerability highlighted several shortcomings of the Common Criteria certification scheme.
The certified Microsoft Windows versions remain at EAL4+ without including the application of any Microsoft security vulnerability patches in their evaluated configuration. This shows both the limitation and the strength of an evaluated configuration.
Based on such assumptions, which may not be realistic for the common use of general-purpose operating systems, the claimed security functions of the Windows products are evaluated. Thus they should only be considered secure in the assumed, specified circumstances, also known as the evaluated configuration. Whether you run Microsoft Windows in the precise evaluated configuration or not, you should apply Microsoft's security patches for the vulnerabilities in Windows as they continue to appear.
In short, accreditation differs from certification by adding the concept of a third party, an Accreditation Body (AB), attesting to technical competence within a laboratory in addition to its adherence to, and operation under, a documented quality system, specific to a Scope of Accreditation. In order for accreditation bodies to recognize each other's accreditations, the International Laboratory Accreditation Cooperation (ILAC) worked to establish methods of evaluating accreditation bodies against another ISO/CASCO standard (ISO/IEC Guide 58, which became ISO/IEC 17011).
Various standards specify the cryptographic algorithms in use. More recently, PP authors have been including cryptographic requirements for CC evaluations that would typically be covered by FIPS 140-2 evaluations, broadening the bounds of the CC through scheme-specific interpretations. Some national evaluation schemes are phasing out EAL-based evaluations and only accept products for evaluation that claim strict conformance with an approved PP. The United States currently only allows PP-based evaluations. The CC originated out of three pre-existing standards.
In the U.S., there are several multidisciplinary accreditation bodies that serve the laboratory community. These bodies accredit testing and calibration labs, reference material producers, PT providers, product certifiers, inspection bodies, forensic institutions and others to a multitude of standards and programs. These ILAC MRA signatory accreditation bodies carry identical acceptance across the globe, so it does not matter which AB is used for accreditation.
Throughout the lifetime of the CC, it has not been universally adopted even by the creator nations; in particular, cryptographic approvals are handled separately, such as by the Canadian/US implementation of FIPS 140 and the CESG Assisted Products Scheme (CAPS) in the UK.
The UK has also produced a number of alternative schemes where the timescales, costs and overheads of mutual recognition have been found to be impeding the operation of the market. In early 2011, NSA/CSS published a paper by Chris Salter which proposed a Protection Profile oriented approach towards evaluation. In this approach, communities of interest form around technology types, which in turn develop protection profiles that define the evaluation methodology for the technology type. The objective is a more robust evaluation.
Various Microsoft Windows versions, including Windows Server 2003 and Windows XP, have been certified, but security patches to address security vulnerabilities are still being published by Microsoft for these Windows systems.
If security vulnerabilities are exploitable in the product's evaluated configuration, the product's Common Criteria certification should be voluntarily withdrawn by the vendor. Alternatively, the vendor should re-evaluate the product to include the application of patches that fix the security vulnerabilities within the evaluated configuration. Failure by the vendor to take either of these steps would result in involuntary withdrawal of the product's certification by the certification body of the country in which the product was evaluated.
A.PEER states: "Any other systems with which the TOE communicates are assumed to be under the same management control and operate under the same security policy constraints. The TOE is applicable to networked or distributed environments only if the entire network operates under the same constraints and resides within a single management domain. There are no security requirements that address the need to trust external systems or the communications links to such systems." This assumption is contained in the Controlled Access Protection Profile (CAPP) to which the Windows products adhere.
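To make the effect of A.PEER concrete, the sketch below checks whether every system a TOE communicates with sits in the same management domain and under the same policy; any external peer means the assumption, and with it the evaluated configuration, no longer holds. The class and field names are hypothetical, not taken from CAPP.

```python
from dataclasses import dataclass

# Illustrative check of the A.PEER assumption quoted above; names are hypothetical.
@dataclass
class Peer:
    name: str
    management_domain: str
    security_policy: str

def satisfies_a_peer(toe_domain: str, toe_policy: str, peers: list[Peer]) -> bool:
    """True only if every communicating system shares the TOE's domain and policy."""
    return all(p.management_domain == toe_domain and p.security_policy == toe_policy
               for p in peers)

peers = [
    Peer("file-server", "corp-it", "baseline-v1"),
    Peer("partner-gateway", "external", "unknown"),   # outside the management domain
]
print(satisfies_a_peer("corp-it", "baseline-v1", peers))  # False: A.PEER is violated
```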
Testing laboratories can then evaluate the products to determine whether they actually meet the claims made about their security attributes. In other words, Common Criteria provides assurance that the process of specification, implementation and evaluation of a computer security product has been conducted in a rigorous, standard and repeatable manner at a level that is commensurate with the target environment for use.
Common Criteria assurance requirements tend to be inspired by the traditional waterfall software development methodology. In contrast, much FOSS software is produced using modern agile paradigms. Although some have argued that the two paradigms do not align well, others have attempted to reconcile them.
Around the world, regions such as the European Community, the Asia-Pacific, the Americas and others established regional cooperations to manage the work needed for such mutual recognition. These regional bodies (all working within the ILAC umbrella) include the European Accreditation Cooperation (EA), the Asia Pacific Laboratory Accreditation Cooperation (APLAC), the Southern African Development Community Cooperation in Accreditation (SADCA) and the Inter-American Accreditation Cooperation (IAAC).
The MRA arrangement was designed with equal weight across all economies. In Canada, there are two accreditation bodies. The accreditation of calibration laboratories is the shared responsibility of the Standards Council of Canada (SCC) Program for the Accreditation of Laboratories-Canada (PALCAN) and the National Research Council of Canada (NRC) Calibration Laboratory Assessment Service (CLAS). The CLAS program provides quality system and technical assessment services and certification of specific measurement capabilities of calibration laboratories.
The CC was produced by unifying these pre-existing standards, predominantly so that companies selling computer products for the government market (mainly for defence or intelligence use) would only need to have them evaluated against one set of standards. The CC was developed by the governments of Canada, France, Germany, the Netherlands, the UK, and the U.S. All testing laboratories must comply with ISO/IEC 17025, and certification bodies will normally be approved against ISO/IEC 17065.