EEA EthTrust Certification is a claim by a security reviewer that the Tested Code is not vulnerable to a number of known attacks or failures to operate as expected, based on the reviewer's assessment against those specific requirements.
No amount of security review can guarantee that a smart contract is secure against **all possible** vulnerabilities, as explained in . However, reviewing a smart contract against the requirements in this specification provides assurance that it is not vulnerable to a known set of potential attacks. This assurance is backed not only by the reputation of the reviewer, but by the collective reputations of multiple security experts from many competing organizations, who collaborated within the EEA to ensure this specification defines protections against a real and significant set of known vulnerabilities.
Example of a link to a requirement: [M] Document Special Code Use.
Variables, introduced to be described further in a statement or requirement, are formatted as `var`. Occasional explanatory notes, presented as follows, are not normative and do not specify formal requirements. An example requirement: Tested code MUST NOT test that the ether balance of an account is equal to (i.e. `==`) a specified amount or the value of a variable. Following the requirement is a brief explanation of the relevant vulnerability, and links to further discussion: in this case a Related Requirement, a relevant subsection of , a link to the "Smart Contract Weakness Classification Registry" [[swcregistry]] that includes test cases, and a link to the description of a related general vulnerability in the "Common Weakness Enumeration" [[CWE]].
Good Practices are formatted the same way as Requirements, with the apparent Security Level [GP]. However, as explained in , meeting them is not necessary and does not in itself change conformance to this specification.
Overriding Requirements enable simpler testing for common simple cases. For more complex Tested Code that uses features which need to be handled with extra care to avoid introducing vulnerabilities, they ensure such usage is appropriately checked.
Typically, an Overriding Requirement for a Security Level [S] requirement applies in relatively unusual cases, or where automated systems are generally unable to verify that Tested Code meets the requirement. Further verification against the applicable Overriding Requirement(s) can determine that the Tested Code is using a feature appropriately, and therefore passes the Security Level [S] requirement.
If there is not an Overriding Requirement for a requirement
that the Tested code does not meet, the Tested code
is not eligible for EEA EthTrust Certification. However, even for such cases,
note the Recommended Good Practice
[**[GP] Meet as many requirements as possible**](#req-R-meet-all-possible); meeting
any requirements in this specification will improve the security of smart contracts.
In the following requirement:
- the Security Level is "**[S]**",
- the name is "**No `tx.origin`**", and
- the Overriding Requirement is "[Q] Verify tx.origin usage".

The requirement that the tested code does not contain a `tx.origin` instruction is automatically verifiable. Tested Code that does have a valid use for `tx.origin`, as decided by the auditor, and that meets the Security Level [Q] Overriding Requirement [Q] Verify tx.origin usage, conforms to this Security Level [S] requirement.
In principle, anyone can submit a smart contract for verification. However, submitters need to be aware of any restrictions on usage arising from copyright conditions or the like. In addition, meeting certain requirements can be more difficult to demonstrate in a situation of limited control over the development of the smart contract.
The Working Group expects its own members, who wrote the specification, to behave to a high standard of integrity and to know the specification well, and notes that there are many others who also do so. The Working Group or EEA MAY seek to develop an auditor certification program for subsequent versions of the EEA EthTrust Security Levels Specification.

A common feature of Ethereum networks is the use of Oracles: functions that can be called to provide information from outside the calling contract, sourced from on-chain or off-chain data. Oracles solve many on-chain problems, from providing random number generation to asset data, but can also provide weather, sports, or other special-interest information. Oracles are used heavily in DeFi and gaming, where asset data and randomization are central to protocol design. This specification contains requirements to check that smart contracts are sufficiently robust to deal appropriately with whatever information is returned, including malformed data that, in the event of Oracle-specific attacks, can be deliberately crafted with malicious intent. However, while some aspects of Oracles are within the scope of this specification, it is still possible that an Oracle provides misinformation or even actively produces harmful disinformation.
The two key considerations are safe use of Oracles and their data, and the risk of Oracle failure.

While many high-quality and trusted Oracles are available, it is still possible to suffer an attack even with legitimate data. When calling on an Oracle, the data received needs to be checked for staleness to avoid front-running attacks. Even in non-DeFi scenarios, such as a source of randomness, it is often important to reset the data source for each transaction, to avoid arbitrage on the next transaction.

The main advantage of using a time-weighted average price (TWAP) Oracle is its robustness to manipulated asset prices, creating a tradeoff of staleness for security. Choose the time window carefully: a window that is too wide will not reflect volatile asset prices, leaking opportunities to arbitrageurs. However, the spot price of an asset is almost never a good data point, as it is the single most volatile and manipulable piece of asset data one might ask of an Oracle, and it will be stale in any case. Instead, choose on-chain or off-chain Oracles that collate a wide variety of source data, discard outliers, and are well-regarded by the community. If an Oracle is off-chain, find out whether it merely reflects additionally stale on-chain data, or provides reliable and accurate data that is truly off-chain.

Specific to DeFi, two common Oracle vulnerabilities have surfaced repeatedly, losing millions of dollars of value on various DeFi protocols. Even with a reasonable TWAP Oracle, a liquidity pool or other DeFi structure with insufficient total liquidity can be manipulated, especially by using flashloans and flashswaps to cheaply raise funds. This can leave it vulnerable to large price swings engineered by an attacker holding only a small amount of liquidity.
The second important consideration when using Oracles is a graceful failure scenario. What happens if the Oracle no longer returns data, or suddenly returns an unlikely value? At least one protocol has suffered losses because a price feed 'hung' at a hardcoded minimum value during a rare price crash rather than truly dropping to zero, and was adversely traded against by traders who accumulated large amounts of a near zero-priced asset to sell back to the protocol; so be wary of hardcoding a minimum or maximum. In the event that an Oracle is broken, be sure to include a fallback: either a second (or third) Oracle, or some reliable failure behavior such as a descriptive error message or transaction revert.
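As a non-normative illustration of these considerations, the following Solidity sketch checks Oracle data for staleness and plausibility, and falls back to a second feed before reverting with a descriptive error. The `AggregatorV3Interface` shape follows Chainlink's conventions, and the staleness threshold and fallback feed are illustrative assumptions, not requirements of this specification.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Chainlink-style price feed interface, reproduced here for illustration.
interface AggregatorV3Interface {
    function latestRoundData()
        external
        view
        returns (uint80 roundId, int256 answer, uint256 startedAt, uint256 updatedAt, uint80 answeredInRound);
}

contract OracleConsumer {
    AggregatorV3Interface public immutable primaryFeed;
    AggregatorV3Interface public immutable fallbackFeed; // illustrative second Oracle
    uint256 public constant MAX_STALENESS = 1 hours;     // illustrative threshold

    constructor(AggregatorV3Interface primary, AggregatorV3Interface secondary) {
        primaryFeed = primary;
        fallbackFeed = secondary;
    }

    /// Returns a price only if it is recent and plausible; otherwise tries the
    /// fallback feed, and finally reverts with a descriptive error.
    function getCheckedPrice() public view returns (uint256) {
        (bool ok, uint256 price) = _tryRead(primaryFeed);
        if (!ok) {
            (ok, price) = _tryRead(fallbackFeed);
        }
        require(ok, "Oracle data unavailable or stale");
        return price;
    }

    function _tryRead(AggregatorV3Interface feed) internal view returns (bool, uint256) {
        try feed.latestRoundData() returns (uint80, int256 answer, uint256, uint256 updatedAt, uint80) {
            if (answer <= 0) return (false, 0); // reject nonsensical values
            if (updatedAt > block.timestamp || block.timestamp - updatedAt > MAX_STALENESS) {
                return (false, 0); // reject stale (or future-dated) data
            }
            return (true, uint256(answer));
        } catch {
            return (false, 0); // broken Oracle: fall through to the fallback behaviour
        }
    }
}
```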
Some requirements in the document refer to Malleable Signatures. These are signatures created according to a scheme constructed so that, given a message and a signature, it is possible to efficiently compute the signature of a different message - usually one that has been transformed in specific ways. While there are valuable use cases that such signature schemes allow, if not used carefully they can lead to vulnerabilities, which is why this specification seeks to constrain their use appropriately. In a similar vein, hash collisions could occur for hashed messages where the input used is malleable, allowing the same signature to be used for two distinct messages.
Other requirements in the document relate to exploits that take advantage of ambiguity in the input used to create the signed message. When a signed message does not include enough identifying information about where, when, and how many times it is intended to be used, the message signature could be used (or reused) in unintended functions, contracts, or chains, or at unintended times.
For more information on this topic, and the potential for exploitation, see also [[chase]].

Gas Griefing is the deliberate abuse of the Gas mechanism that Ethereum uses to regulate the consumption of computing power, to cause an unexpected or adverse outcome, much in the style of a Denial of Service attack. Because Ethereum is designed with the Gas mechanism as a regulating feature, it is insufficient to simply check that a transaction has enough Gas; checking for Gas Griefing needs to take into account the goals and business logic that the Tested Code implements.
Gas Siphoning is another abuse of the Gas mechanism that Ethereum uses to regulate the consumption of computing power, where attackers steal Gas from vulnerable contracts either to deny service or for their own gain (e.g. to mint Gas Tokens). Similar to Gas Griefing, checking for Gas Siphoning requires careful consideration of the goals and business logic that the Tested Code implements.
Gas Tokens use Gas when minted and free slightly less Gas when burned. Gas Tokens minted when Gas prices are low can be burned to subsidize Ethereum transactions when Gas prices are high. In addition, a common feature of Ethereum network upgrades is to change the Gas Price of specific operations. EEA EthTrust certification only applies for the EVM version(s) specified; it is not valid for other EVM versions. Thus it is important to recheck code to ensure its security properties remain the same across network upgrades, or take remedial action.
MEV, used in this document to mean "Maliciously Extracted Value", refers to the potential for block producers or other participants in a blockchain to extract value that is not intentionally given to them (in other words, to steal it) by maliciously reordering transactions, as in Timing Attacks, or by suppressing them.
The term MEV is commonly expanded as "Miner Extracted Value", and sometimes "Maximum Extractable Value". As in the example above, sometimes block miners can take best advantage of a vulnerability. But MEV can be exploited by other participants, for example duplicating most of a submitted transaction, but offering a higher fee so it is processed first.
Some MEV attacks can be prevented by careful consideration of the information that is included in a transaction, including the parameters required by a contract. Other strategies include the use of hash commitment schemes [[hash-commit]], batch execution, private transactions [[EEA-clients]], Layer 2 [[EEA-L2]], or an extension to establish the ordering of transactions before releasing sensitive information to all nodes participating in a blockchain. The Ethereum Foundation curates up-to-date information on MEV [[EF-MEV]].

Censorship Attacks occur when a block processor actively suppresses a proposed transaction for their own benefit.
Future Block Attacks are those where a block proposer knows they will produce a particular block, and uses this information to craft the block to maliciously extract value from other transactions. See also [[futureblock]].
Timing Attacks are a class of MEV attacks where an adversary benefits from placing their or a victim's transactions earlier or later in a block. They include Front-Running, Back-Running, and Sandwich Attacks.
Front-Running is based on the fact that transactions are visible to the participants in the network before they are added to a block. This allows a malicious participant to submit an alternative transaction, frustrating the aim of the original transaction.
Back-Running is similar to Front-Running, except the attacker places their transactions after the one they are attacking.
In Sandwich Attacks, an attacker places a victim's transaction undesirably between two other transactions.
A common pattern for upgradeable or proxy-based contracts is that an `initialize()` function is called in a transaction subsequent to the contract deployment. This scenario is ripe for front-running attacks, and can result in protocol takeover by malicious parties, and theft or loss of funds. Any initializable contract should be initialized in the same transaction as the deployment.

Moreover, developers should consider carefully the deployment implications of assigning access roles to `msg.sender` or other variables in constructors and initializers. This is discussed further in the relevant requirements.
Several libraries and tools exist specifically for safe proxy usage and safe contract deployment. From command-line tools to libraries to sophisticated UI-based deployment tools, many solutions exist to prevent unsafe proxy deployments and upgrades.

Consider using access control in a given contract's initializer, and limiting the number of times an initializer can be called on or after deployment, to enhance safety and transparency for the protocol itself and its users. Furthermore, providing a `_disableInitializers()`-style function on a logic contract can block any future initializer calls after deployment, preventing later attacks or accidents.
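The following non-normative sketch shows both ideas, assuming an OpenZeppelin-style `Initializable` base contract; the import path and function names follow that library's conventions and are illustrative, not required by this specification.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Assumes the OpenZeppelin Contracts (upgradeable variant) library is installed.
import {Initializable} from "@openzeppelin/contracts-upgradeable/proxy/utils/Initializable.sol";

contract MyLogic is Initializable {
    address public admin;

    /// @custom:oz-upgrades-unsafe-allow constructor
    constructor() {
        // Lock the implementation (logic) contract itself:
        // only proxies delegating to it can ever be initialized.
        _disableInitializers();
    }

    /// Initialize the proxy in the same transaction as its deployment where possible,
    /// and pass the intended admin explicitly rather than relying on msg.sender.
    function initialize(address intendedAdmin) external initializer {
        require(intendedAdmin != address(0), "admin required");
        admin = intendedAdmin;
    }
}
```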
Although this specification does not require that Tested Code has been deployed, some requirements
are more easily tested when code has been deployed to a blockchain, or can only be thoroughly tested "in situ".
EEA EthTrust Certification is available at three Security Levels. The Security Levels describe minimum requirements for certifications at each Security Level: [S], [M], and [Q]. These Security Levels provide successively stronger assurance that a smart contract does not have specific security vulnerabilities.
- [Security Level [S]](#sec-levels-one) is designed so that for most cases, where common features of Solidity are used following well-known patterns, Tested Code can be certified by an automated "static analysis" tool.
- [Security Level [M]](#sec-levels-two) mandates a stricter static analysis. It includes requirements where a human auditor is expected to determine whether use of a feature is necessary, or whether a claim about the security properties of code is justified.
- [Security Level [Q]](#sec-levels-three) provides analysis of the business logic the Tested Code implements, verifying not only that the code does not exhibit known security vulnerabilities, but also that it correctly implements what it claims to do.

The optional [Recommended Good Practices](#sec-good-practice-recommendations), correctly implemented, further enhance the security of smart contracts. However, it is not necessary to test them to conform to this specification.

The vulnerabilities addressed by this specification come from a number of sources, including Solidity Security Alerts [[solidity-alerts]], the Smart Contract Weakness Classification [[swcregistry]], TMIO Best Practices [[tmio-bp]], various sources of Security Advisory Notices, discussions in the Ethereum community and academics presenting newly discovered vulnerabilities, and the extensive practical experience of participants in the Working Group.

EEA EthTrust Certification at Security Level [S] is intended to allow an unguided automated tool to analyze most contracts' bytecode and source code, and determine whether they meet the requirements. For some situations that are difficult to verify automatically, usually only likely to arise in a small minority of contracts, there are higher-level Overriding Requirements that can be fulfilled to meet a requirement for this Security Level.
To be eligible for EEA EthTrust Certification for Security Level [S], Tested code MUST fulfil all Security Level [S] requirements, unless it meets the applicable Overriding Requirement(s) for any Security Level [S] requirement it does not meet.

[S] Encode hashes with chainid
Tested code MUST create hashes for transactions that incorporate `chainid` values, following the recommendation described in [[!EIP-155]].
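As a non-normative example of the intent behind this requirement, a contract hashing messages for later verification can bind them to the chain on which they are meant to be used by including `block.chainid` (and its own address) in the digest; the function below is an illustrative sketch, not a required pattern.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract ChainBoundHash {
    /// Builds a digest that cannot be replayed on another chain or another contract,
    /// because block.chainid and this contract's address are part of the hashed data.
    function messageDigest(address to, uint256 amount, uint256 nonce) public view returns (bytes32) {
        return keccak256(abi.encode(block.chainid, address(this), to, amount, nonce));
    }
}
```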
[S] No CREATE2
Tested code MUST NOT contain a `CREATE2` instruction unless it meets the Set of Overriding Requirements

The `CREATE2` opcode provides the ability to interact with addresses that do not exist yet on-chain but could eventually contain code. While this can be useful for deployments and counterfactual interactions with contracts, it can allow external calls to code that is not yet known, and that could turn out to be malicious or insecure due to errors or weak protections.
[S] No tx.origin
Tested code MUST NOT contain a `tx.origin` instruction unless it meets the Overriding Requirement [Q] Verify tx.origin usage.

`tx.origin` is a global variable in Solidity which returns the address of the account that sent the transaction. A contract using `tx.origin` can allow an authorized account to call into a malicious contract, enabling the malicious contract to pass authorization checks in unintended cases. Use `msg.sender` for authorization instead of `tx.origin`.
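The following non-normative sketch contrasts the vulnerable check with the safe one; the contract and function names are illustrative only.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract Vault {
    address public owner;

    constructor() {
        owner = msg.sender;
    }

    // Vulnerable: if the owner is tricked into calling a malicious contract,
    // that contract can call this function and still pass the check, because
    // tx.origin is the externally owned account that started the transaction.
    function withdrawAllBad(address payable to) external {
        require(tx.origin == owner, "not owner");
        to.transfer(address(this).balance);
    }

    // Safer: msg.sender is the immediate caller, so an intermediate
    // malicious contract cannot pass the check on the owner's behalf.
    function withdrawAll(address payable to) external {
        require(msg.sender == owner, "not owner");
        to.transfer(address(this).balance);
    }

    receive() external payable {}
}
```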
[S] No Exact Balance Check
Tested code MUST NOT test that the balance of an account is exactly equal to (i.e. `==`) a specified amount or the value of a variable.
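As a non-normative illustration, Ether can be forced into a contract (for example via `selfdestruct()`), so an exact-equality test can be made to fail permanently, whereas a threshold test cannot:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract BalanceCheck {
    uint256 public immutable target;

    constructor(uint256 targetWei) {
        target = targetWei;
    }

    // Fragile: anyone can force extra wei into the contract,
    // so this exact-equality test can be made to fail forever.
    function readyExact() external view returns (bool) {
        return address(this).balance == target;
    }

    // More robust: a threshold test cannot be broken by forced deposits.
    function ready() external view returns (bool) {
        return address(this).balance >= target;
    }

    receive() external payable {}
}
```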
[S] No Conflicting Inheritance
Tested code MUST NOT include more than one variable, or operative function with different code, with the same name, unless it meets the Overriding Requirement [M] Document Name Conflicts.
[S] No Hashing Consecutive Variable Length Arguments
Tested Code MUST NOT use `abi.encodePacked()` with consecutive variable length arguments.

Arguments to `abi.encodePacked()` are packed in order prior to hashing, without recording their individual lengths. Hash collisions are therefore possible by rearranging the elements between consecutive variable length arguments while keeping their concatenated order the same.
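A non-normative example of such a collision:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract PackedCollision {
    // Both hashes are identical, because packing drops length information:
    // encodePacked("AA", "BBB") and encodePacked("AAB", "BB") both produce bytes("AABBB").
    function collide() external pure returns (bytes32 a, bytes32 b) {
        a = keccak256(abi.encodePacked("AA", "BBB"));
        b = keccak256(abi.encodePacked("AAB", "BB"));
        // a == b; use abi.encode(), or a fixed-length separator, to avoid this.
    }
}
```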
[S] No selfdestruct()
Tested code MUST NOT contain the `selfdestruct()` instruction or its now-deprecated alias `suicide()` unless it meets the Set of Overriding Requirements
[S] No assembly {}
Tested Code MUST NOT contain the `assembly {}` instruction unless it meets the Set of Overriding Requirements
[S] No Unicode Direction Control Characters
Tested code MUST NOT contain any of the Unicode Direction Control Characters U+2066, U+2067, U+2068, U+2069, U+202A, U+202B, U+202C, U+202D, or U+202E, unless it meets the Overriding Requirement [M] No Unnecessary Unicode Controls.
See also the Related Requirements: [M] Protect External Calls, [M] Handle External Call Returns, and [Q] Verify External Calls.
[S] Check External Calls Return
Tested Code that makes external calls using the Low-level Call Functions (i.e. `call`, `delegatecall`, `staticcall`, `send`, and `transfer`) MUST check the returned value from each usage to determine whether the call failed.

The Low-level Call Functions are:
- `call()`,
- `delegatecall()`,
- `staticcall()`,
- `send()`, and
- `transfer()`.

Calls using these functions behave differently: they return a boolean indicating whether the call completed successfully. Not testing explicitly whether these calls fail could lead to unexpected behavior in the caller contract.
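A non-normative sketch of the difference (names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract Payout {
    // Bad: the boolean returned by send() is silently dropped, so a failed
    // transfer goes unnoticed and the contract's state may no longer match reality.
    function payUnchecked(address payable to, uint256 amount) external {
        to.send(amount);
    }

    // Good: check the status returned by the low-level call and handle failure.
    function pay(address payable to, uint256 amount) external {
        (bool success, ) = to.call{value: amount}("");
        require(success, "payment failed");
    }

    receive() external payable {}
}
```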
See also [SWC-104](https://swcregistry.io/docs/SWC-104) in [[swcregistry]], error handling documentation in [[error-handling]], unchecked return value as described in [[CWE-252]], and the Related Requirements: [S] Use Check-Effects-Interaction, [M] Handle External Call Returns, and [Q] Verify External Calls.
[S] Use Check-Effects-Interaction
Tested code that makes external calls MUST use the Checks-Effects-Interactions pattern to protect against Re-entrancy Attacks unless it meets the Set of Overriding Requirements
or it meets the Set of Overriding Requirements
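A non-normative sketch of the Checks-Effects-Interactions pattern applied to a withdrawal function:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract Withdrawals {
    mapping(address => uint256) public balanceOf;

    function deposit() external payable {
        balanceOf[msg.sender] += msg.value;
    }

    function withdraw(uint256 amount) external {
        // Checks: validate the request first.
        require(balanceOf[msg.sender] >= amount, "insufficient balance");

        // Effects: update state *before* the external interaction,
        // so a re-entrant call sees the reduced balance.
        balanceOf[msg.sender] -= amount;

        // Interactions: external call last.
        (bool success, ) = msg.sender.call{value: amount}("");
        require(success, "transfer failed");
    }
}
```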
[S] No delegatecall()
Tested Code MUST NOT contain the `delegatecall()` instruction unless it meets the Overriding Requirement [M] Protect External Calls, or it meets the Set of Overriding Requirements
[S] No Overflow/Underflow
Tested code MUST NOT use a Solidity compiler version older than 0.8.0 unless it meets the Set of Overriding Requirements
[S] Compiler Bug SOL-2022-6
Tested code that ABI-encodes a tuple (including a `struct`, `return` value, or parameter list) with ABIEncoderV2, where the tuple includes a dynamic component and its last element is a `calldata` static array of base type `uint` or `bytes32`, MUST NOT use a Solidity compiler version between 0.5.8 and 0.8.15 (inclusive).
[S] Compiler Bug SOL-2022-5 with .push()
Tested code that copies `bytes` arrays from `calldata` or `memory` whose size is not a multiple of 32 bytes, and has an empty `.push()` instruction that writes to the resulting array, MUST NOT use a Solidity compiler version older than 0.8.15.
[S] Compiler Bug SOL-2022-3
Tested code that
MUST NOT use a Solidity compiler version between 0.6.9 and 0.8.13 (inclusive).
[S] Compiler Bug SOL-2022-2
Tested code with a nested array that is passed to an external function or used in `abi.encode()` MUST NOT use a Solidity compiler version between 0.6.9 and 0.8.13 (inclusive).
[S] Compiler Bug SOL-2022-1
Tested code that uses literal values for `bytesNN` types and passes such literals to `abi.encodeCall()` as the first parameter MUST NOT use Solidity compiler version 0.8.11 nor 0.8.12.
`bytesNN`, or Fixed-length Variable types, are types that specify the length of the variable as a fixed number of bytes, following the pattern `bytes1`, `bytes2`, ... `bytes32`.
Solidity compiler versions 0.8.11 and 0.8.12 had a bug that meant literal parameters were incorrectly encoded by `abi.encodeCall()` in certain circumstances.
See also the 16 March 2022
[security alert](https://blog.soliditylang.org/2022/03/16/encodecall-bug/).
[S] Compiler Bug SOL-2021-4 Tested Code that uses custom value types shorter than 32 bytes MUST NOT use Solidity compiler version 0.8.8.
Solidity compiler version 0.8.8 had a bug that assigned a full 32 bytes of storage to custom types that did not need it. This can be misused to enable reading arbitrary storage, as well as causing errors if the Tested Code contains code compiled using different Solidity compiler versions. See also the 29 September 2021 [security alert](https://blog.soliditylang.org/2021/09/29/user-defined-value-types-bug/)
[S] Compiler Bug SOL-2021-2
Tested code that uses abi.decode()
on byte arrays as `memory`,
MUST NOT use the ABIEncoderV2 with a Solidity compiler version between 0.4.16 and 0.8.3
(inclusive).
[S] Compiler Bug SOL-2021-1
Tested code that has 2 or more occurrences of an instruction
keccak(mem,length)
where
MUST NOT use the Optimizer with a Solidity compiler version older than 0.8.3.
[S] Compiler Bug SOL-2020-11-push
Tested code that copies an empty byte array to storage, and subsequently increases
the size of the array using push()
MUST NOT use a Solidity compiler version
older than 0.7.4.
[S] Compiler Bug SOL-2020-10
Tested code that copies an array of types shorter than 16 bytes to a longer array
MUST NOT use a Solidity compiler version older than 0.7.3.
[S] Compiler Bug SOL-2020-9
Tested code that defines Free Functions MUST NOT use Solidity compiler version 0.7.1.
Free Functions are functions defined outside of a contract. They are executed in the context of a contract: they still have access to the variable `this`, can call other contracts, send them Ether and destroy the contract that called them, among other things. The main difference to functions defined inside a contract is that free functions do not have direct access to storage variables and functions not in their scope (see the [Solidity documentation](https://docs.soliditylang.org/en/latest/contracts.html#functions)).

Solidity compiler version 0.7.1 did not correctly distinguish overlapping Free Function declarations, meaning that the wrong function could be called. See examples of a [passing contract](https://entethalliance.github.io/eta-registry/examples/SOL-2020-9-fail.sol) and a [failing contract](https://entethalliance.github.io/eta-registry/examples/SOL-2020-9-fail.sol) for this requirement.
[S] Compiler Bug SOL-2020-8
Tested code that calls internal library functions with calldata parameters
called via using for
MUST NOT use Solidity compiler version 0.6.9.
[S] Compiler Bug SOL-2020-6
Tested code that accesses an array slice using an expression for the starting index
that can evaluate to a value other than zero
MUST NOT use the ABIEncoderV2 with a Solidity compiler version between 0.6.0 and 0.6.7 (inclusive).
[S] Compiler Bug SOL-2020-7
Tested code that passes a string literal containing two consecutive backslash ("\")
characters to an encoding function or an external call
MUST NOT use the ABIEncoderV2 with a Solidity compiler version between 0.5.14 and 0.6.7 (inclusive).
[S] Compiler Bug SOL-2020-5
Tested code that defines a contract that does not include a constructor, but
has a base contract that defines a constructor not defined as `payable`
MUST NOT use a Solidity compiler version between 0.4.5 and 0.6.7 (inclusive),
unless it meets the Overriding Requirement
[M] Check Constructor Payment.
[S] Compiler Bug SOL-2020-4
Tested code that makes assignments to tuples that
MUST NOT use a Solidity compiler version older than 0.6.4.
[S] Compiler Bug SOL-2020-3
Tested code that declares arrays of size larger than 2^256-1 MUST NOT use a Solidity compiler version older than 0.6.5.
[S] Compiler Bug SOL-2020-1
Tested code that declares variables inside a `for` loop that contains a `break`
or `continue` statement MUST NOT use the Yul Optimizer with Solidity compiler version 0.6.0
nor a Solidity compiler version between 0.5.8 and 0.5.15 (inclusive).
[S] Use a modern Compiler
Tested code MUST NOT use a Solidity compiler version older than 0.6.0,
unless it meets all the following requirements from the
EEA EthTrust Security Levels Specification Version 1,
as if they were Overriding Requirements:
[S] No Ancient Compilers
Tested code MUST NOT use a Solidity compiler version older than 0.3.
EEA EthTrust Certification at Security Level [M] means that the Tested Code has been carefully reviewed by a human auditor or team, doing a "manual analysis", and important security issues have been addressed to their satisfaction.
This level includes a number of Overriding Requirements for cases when Tested Code does not meet a Security Level [S] requirement directly, because it uses an uncommon feature that introduces higher risk, or because in certain circumstances testing that the requirement has been met requires human judgement. Passing the relevant Overriding Requirement tests that the feature has been implemented sufficiently well to satisfy the auditor that it does not expose the Tested Code to the known vulnerabilities identified in this Security Level.
[M] Pass Security Level [S]
To be eligible for EEA EthTrust certification at Security Level [M],
Tested code MUST meet the requirements for Security Level [S].
[M] Explicitly Disambiguate Evaluation Order
Tested code MUST NOT contain statements where variable evaluation order
can result in different outcomes
[M] No failing `assert()` statements
`assert()` statements in Tested Code MUST NOT fail.
[M] No Unnecessary Unicode Controls
Tested code MUST NOT use Unicode direction control characters
unless they are necessary to render text appropriately,
and the resulting text does not mislead readers.
This is an Overriding Requirement for
[S] No Unicode Direction Control Characters.
Security Level [M] permits the use of Unicode direction control characters in text strings, subject to analysis of whether they are necessary.
[M] No Homoglyph-style Attack
Tested code MUST NOT use homoglyphs, Unicode control characters, combining characters, or characters from multiple Unicode blocks, if the impact is misleading.
[M] Protect External Calls
For Tested code that makes external calls:
unless it meets the Set of Overriding Requirements
This is an Overriding Requirement for [S] Use Check-Effects-Interaction.
EEA EthTrust Certification at Security Level [M] allows calling within a set of contracts that form part of the Tested Code. This ensures all contracts called are audited as a group at this Security Level. If a contract calls a well-known external contract that is not audited as part of the Tested Code, it is possible to certify conformance to this requirement through the Overriding Requirements, which allow the certifier to claim on their own judgement that the contracts called provide appropriate security.

The extended requirements around documentation of the Tested Code that apply when claiming conformance through implementation of the Overriding Requirements in this case reflect the potential for very high risk if the external contracts are simply assumed by a reviewer to be secure because they have been widely used. Unless the Tested Code deploys contracts, and retrieves their address accurately for calling, it is necessary to check that the contracts are really deployed at the addresses assumed in the Tested Code.

The same level of protection against Re-entrancy Attacks has to be provided to the Tested Code overall as for the Security Level [S] requirement.
[M] Avoid Read-only Re-entrancy Attacks
Tested Code that makes external calls MUST protect itself against Read-only Re-entrancy Attacks.
[M] Handle External Call Returns
Tested Code that makes external calls MUST reasonably handle possible errors.
[M] Document Special Code Use
Tested Code MUST document the need for each instance of:
- `selfdestruct()` or its deprecated alias `suicide()`,
- `assembly {}`,
- `CREATE2`,
- `block.number` or `block.timestamp`,

and MUST describe how the Tested Code protects against misuse or errors in these cases, and the documentation MUST be available to anyone who can call the Tested Code.
This is part of several Sets of Overriding Requirements, one for each of
See also the Related Requirements: [Q] Document Contract Logic, [Q] Document System Architecture, [Q] Implement as Documented, [Q] Verify External Calls, [M] Avoid Common `assembly {}` Attack Vectors, [M] Compiler Bug SOL-2022-5 in `assembly {}`, [M] Compiler Bug SOL-2022-4, [M] Compiler Bug SOL-2021-3, and [M] Compiler Bug SOL-2019-2 in `assembly {}`.
[M] Ensure Proper Rounding Of Computations Affecting Value
Tested code MUST identify and protect against exploiting rounding errors:
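A non-normative sketch of one common precaution: multiply before dividing, and decide explicitly which party any rounding remainder should favour (the fee parameters here are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract FeeMath {
    uint256 public constant FEE_BPS = 25;       // 0.25% expressed in basis points
    uint256 public constant BPS_DENOM = 10_000;

    // Loses precision: amount / BPS_DENOM truncates first, so small amounts pay zero fee.
    function feeRoundedBadly(uint256 amount) external pure returns (uint256) {
        return (amount / BPS_DENOM) * FEE_BPS;
    }

    // Better: multiply first, divide last, and round up so that dust cannot
    // be extracted from the protocol by repeating many tiny operations.
    function fee(uint256 amount) external pure returns (uint256) {
        return (amount * FEE_BPS + BPS_DENOM - 1) / BPS_DENOM;
    }
}
```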
[M] Protect Self-destruction
Tested code that contains the `selfdestruct()` or `suicide()` instructions MUST
unless it meets the Overriding Requirement [Q] Implement Access Control.
This is an Overriding Requirement for [S] No selfdestruct().
[M] Avoid Common assembly {}
Attack Vectors
Tested Code MUST NOT use the `assembly {}` instruction to change a variable unless the code cannot:
This is part of a Set of Overriding Requirements for [S] No assembly {}.
[M] Protect CREATE2 Calls
For Tested Code that uses the `CREATE2` instruction, any contract to be deployed using `CREATE2` MUST NOT contain the `selfdestruct()`, `delegatecall()` nor `callcode()` instructions, and
unless it meets the Set of Overriding Requirements
This is part of a Set of Overriding Requirements for [S] No CREATE2.
The `CREATE2` opcode's ability to interact with addresses whose code does not exist yet on-chain mandates protections to prevent external calls to malicious or insecure contract code that is not yet known. The Tested Code needs to include any code that can be deployed using `CREATE2`, to verify protections are in place and the code behaves as the contract author claims. This includes ensuring that opcodes that can change the immutability of, or forward calls from, the contracts deployed with `CREATE2`, such as `selfdestruct()`, `delegatecall()` and `callcode()`, are not present. If any of these opcodes are present, the additional protections and documentation required by the Overriding Requirements are necessary.
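As non-normative background, the address of a contract deployed with `CREATE2` is fully determined by the deployer address, the salt, and the init code (per EIP-1014), which is why the code to be deployed can, and should, be reviewed before any interaction with that address:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract Create2Helper {
    /// Predicts the CREATE2 deployment address for a given salt and init code,
    /// per EIP-1014: keccak256(0xff ++ deployer ++ salt ++ keccak256(initCode))[12:].
    function predictAddress(bytes32 salt, bytes memory initCode) public view returns (address) {
        bytes32 hash = keccak256(
            abi.encodePacked(bytes1(0xff), address(this), salt, keccak256(initCode))
        );
        return address(uint160(uint256(hash)));
    }
}
```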
[M] No Overflow/Underflow
Tested code MUST NOT contain calculations that can overflow or underflow unless
This is an Overriding Requirement for [S] No Overflow/Underflow.
[M] Document Name Conflicts
Tested code MUST clearly document the order of inheritance for each function or variable that shares a name with another function or variable.
This is an Overriding Requirement for
[S] No Conflicting Inheritance.
[M] Sources of Randomness
Sources of randomness used in Tested Code MUST be
sufficiently resistant to prediction that their purpose is met.
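As a non-normative illustration, block data is visible to, and partly influenceable by, block producers and callers, so on its own it is rarely sufficiently resistant to prediction:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract Lottery {
    // Weak: block.timestamp and blockhash are predictable or influenceable
    // by block producers, and by callers who can choose when to transact.
    function weakRandom() external view returns (uint256) {
        return uint256(keccak256(abi.encode(block.timestamp, blockhash(block.number - 1))));
    }
    // When the outcome has value to participants, a commit-reveal scheme or an
    // external randomness Oracle (e.g. a VRF) is usually needed instead.
}
```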
[M] Don't misuse block data
Block numbers and timestamps used in Tested Code MUST NOT introduce vulnerabilities to MEV or similar attacks.
[M] Proper Signature Verification
Tested Code MUST use proper signature verification to ensure authenticity of messages that were signed off-chain, e.g. by using `ecrecover()`.

Some smart contracts process messages that were signed off-chain to increase flexibility, while maintaining authenticity. Smart contracts performing their own signature verification need to ensure that they are correctly verifying message authenticity.

See also SWC-122 [[swcregistry]]. For code that does use `ecrecover()`, see the Related Requirements [S] Compiler Bug SOL-2017-3 and [M] Validate `ecrecover()` input.
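A non-normative sketch of basic verification with `ecrecover()`; in practice a well-reviewed library (such as an ECDSA helper that also rejects malleable signatures) is preferable:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract SignatureCheck {
    address public immutable signer;

    constructor(address trustedSigner) {
        signer = trustedSigner;
    }

    function verify(bytes32 digest, uint8 v, bytes32 r, bytes32 s) public view returns (bool) {
        address recovered = ecrecover(digest, v, r, s);
        // ecrecover() returns address(0) for invalid signatures; never treat that as a match.
        return recovered != address(0) && recovered == signer;
    }
}
```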
[M] No Improper Usage of Signatures for Replay Attack Protection
Tested Code using signatures to prevent replay attacks MUST ensure that signatures cannot be reused:
unless it meets the Overriding Requirement [Q] Intended Replay. Additionally, Tested Code MUST verify that multiple signatures cannot be created for the same message, as is the case with Malleable Signatures.
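A non-normative sketch of one approach: the signed digest commits to the chain, the verifying contract, and a per-signer nonce that is consumed on use (names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract ReplayProtected {
    mapping(address => uint256) public nonces;

    function execute(address from, bytes calldata data, uint8 v, bytes32 r, bytes32 s) external {
        // The digest commits to the chain, this contract, the signer, their current
        // nonce, and the payload, so the same signature cannot be replayed elsewhere or again.
        bytes32 digest = keccak256(
            abi.encode(block.chainid, address(this), from, nonces[from], keccak256(data))
        );
        address recovered = ecrecover(digest, v, r, s);
        require(recovered != address(0) && recovered == from, "bad signature");

        nonces[from]++; // consume the nonce before acting on the message

        // ... perform the authorized action using `data` ...
    }
}
```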
[M] Solidity Compiler Bug 2023-1
Tested code that contains a compound expression with side effects that uses `.selector`
MUST use the viaIR option with Solidity compiler versions between 0.6.2 and 0.8.20 inclusive.
[M] Compiler Bug SOL-2022-7
Tested code that uses a Solidity compiler version between 0.8.13 and 0.8.17 inclusive MUST NOT have storage writes followed by conditional early terminations from inline assembly functions containing `return()` or `stop()` instructions.
This is part of the Set of Overriding Requirements for
[S] No assembly {}
.
[M] Compiler Bug SOL-2022-5 in `assembly {}`
Tested code that copies `bytes` arrays from calldata or memory whose size is not
a multiple of 32 bytes, and has an `assembly {}` instruction that reads that data
without explicitly matching the length that was copied,
MUST NOT use a Solidity compiler version older than 0.8.15.
This is part of the Set of Overriding Requirements for
[S] No assembly {}
.
See also the Related Requirements: [M] Avoid Common `assembly {}` Attack Vectors, [M] Document Special Code Use, [M] Compiler Bug SOL-2022-4, [M] Compiler Bug SOL-2021-3, and [M] Compiler Bug SOL-2019-2 in `assembly {}`.
[M] Compiler Bug SOL-2022-4
Tested code that has at least two `assembly {}` instructions, such that one writes to memory, e.g. by storing a value in a variable, but does not access that memory again, and code in another `assembly {}` instruction refers to that memory, MUST NOT use the yulOptimizer with Solidity compiler versions 0.8.13 or 0.8.14.
This is part of the Set of Overriding Requirements for
[S] No assembly {}
.
See also the Related Requirements: [M] Avoid Common `assembly {}` Attack Vectors, [M] Document Special Code Use, [M] Compiler Bug SOL-2022-7, [M] Compiler Bug SOL-2022-5 in `assembly {}`, [M] Compiler Bug SOL-2021-3, and [M] Compiler Bug SOL-2019-2 in `assembly {}`.
[M] Compiler Bug SOL-2021-3
Tested code that reads an `immutable` signed integer of a `type` shorter than
256 bits within an `assembly {}` instruction MUST NOT use a Solidity compiler version
between 0.6.5 and 0.8.8 (inclusive).
This is part of the Set of Overriding Requirements for
[S] No assembly {}
.
See also the Related Requirements: [M] Avoid Common `assembly {}` Attack Vectors, [M] Document Special Code Use, [M] Compiler Bug SOL-2022-5 in `assembly {}`, [M] Compiler Bug SOL-2022-4, and [M] Compiler Bug SOL-2019-2 in `assembly {}`.
[M] Check Constructor Payment
Tested code that allows payment to a constructor function that is not explicitly defined as `payable` MUST NOT use a Solidity compiler version between 0.4.5 and 0.6.7 (inclusive).
This is an Overriding Requirement for
[S] Compiler Bug SOL-2020-5.
[M] Use a Modern Compiler
Tested code MUST NOT use a Solidity compiler version older than 0.6.0,
unless it meets all the following requirements from the
EEA EthTrust Security Levels Specification Version 1,
as if they were Overriding Requirements:
[Q] Pass Security Level [M]
To be eligible for EEA EthTrust certification at Security Level [Q],
Tested code MUST meet the requirements for Security Level [M].
[Q] Code Linting
Tested code
assert()
statements, and
[Q] Manage Gas Use Increases
Sufficient Gas MUST be available to work with data structures in the Tested Code
that grow over time, in accordance with descriptions provided for
[Q] Document Contract Logic.
[Q] Protect Gas Usage
Tested Code MUST protect against malicious actors stealing or wasting gas.
[Q] Protect against Oracle Manipulation
Tested Code MUST protect itself against relying on Oracles that are vulnerable to manipulation
to enable an MEV attack.
Some Oracles are known to be vulnerable to manipulation, for example because they derive the information they provide from information vulnerable to Read-only Re-entrancy Attacks, or manipulation of prices through the use of flashloans, among other well-known attacks.
It is important to check the mechanism used by an Oracle to generate the information it provides, and the potential exposure of Tested Code that relies on that Oracle to the effects of manipulating its inputs or code to enable attacks. See also the Related Requirements [Q] Protect against Front-running, and [Q] Protect against MEV Attacks.
[Q] Protect against Front-Running
Tested Code MUST NOT require information
in a form that can be used to enable a Front-Running attack.
[Q] Protect against MEV Attacks
Tested Code that is susceptible to MEV attacks MUST follow appropriate
design patterns to mitigate this risk.
MEV refers to the potential that a block producer can maliciously reorder or suppress transactions, or another participant in a blockchain can propose a transaction or take other action to gain a benefit that was not intended to be available to them.
This requirement entails a careful judgement by the auditor of how the Tested Code is vulnerable to MEV attacks, and what mitigation strategies are appropriate. Some approaches are discussed further in . Many attack types need to be considered, including at least Censorship Attacks, Future Block Attacks, and Timing Attacks (Front-Running, Back-Running, and Sandwich Attacks). See also the Related Requirements [S] No Exact Balance Check, [M] Sources of Randomness, [M] Don't misuse block data, [Q] Protect against Oracle Manipulation, and [Q] Protect against Front-running.
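A non-normative sketch of one mitigation mentioned above, a hash commitment (commit-reveal) scheme that keeps sensitive parameters out of the mempool until their ordering is settled:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract CommitReveal {
    mapping(address => bytes32) public commitments;
    mapping(address => uint256) public commitBlock;

    // Phase 1: publish only a hash, so observers of the mempool learn nothing useful.
    function commit(bytes32 commitment) external {
        commitments[msg.sender] = commitment;
        commitBlock[msg.sender] = block.number;
    }

    // Phase 2: reveal after at least one block, when the commit's ordering is fixed.
    function reveal(uint256 choice, bytes32 blinding) external {
        require(block.number > commitBlock[msg.sender], "reveal too early");
        require(
            keccak256(abi.encode(msg.sender, choice, blinding)) == commitments[msg.sender],
            "commitment mismatch"
        );
        delete commitments[msg.sender];
        // ... act on `choice`, which could not have been front-run at commit time ...
    }
}
```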
[Q] Protect Against Governance Takeovers
Tested Code which includes a governance system MUST protect against one external
entity taking control via exploit of the governance design.
[Q] Process All Inputs
Tested Code MUST validate inputs, and function correctly whether the input
is as designed or malformed.
[Q] State Changes Trigger Events
Tested code MUST emit a contract event for all transactions that cause state changes.
[Q] No Private Data
Tested code MUST NOT store Private Data on the blockchain
Private data is used in this specification to refer to information that is not intended to be generally available to the public. For example, an individual's home telephone number is generally private data, while a business' customer enquiries telephone number is generally not private data. Similarly, information identifying a person's account is normally private data, but there are circumstances where it is public data. In such cases, that public data can be recorded on-chain in conformance with this requirement.
PLEASE NOTE: In some cases regulation such as the [[GDPR]] imposes formal legal requirements on some private data. However, performing a test for this requirement results in an expert technical opinion on whether data that the auditor considers private is exposed. A statement about whether Tested Code meets this requirement does not represent any form of legal advice or opinion, attorney representation, or the like.
[Q] Intended Replay
If a signature within the Tested Code can be reused, the replay instance MUST be intended, documented,
and safe for re-use.
This is an Overriding Requirement for [M] No Improper Usage of Signatures for Replay Attack Protection.
Security Level [Q] conformance requires a detailed description of how the Tested Code is intended to behave. Alongside detailed testing requirements to check that it does behave as described with regard to specific known vulnerabilities, it is important that the claims made for it are accurate. This requirement underpins a Good Practice: that the Tested Code fulfils claims made for it outside audit-specific documentation.
The combination of these requirements helps ensure there is no malicious code, such as malicious "back doors" or "time bombs" hidden in the Tested Code. Since there are legitimate use cases for code that behaves as e.g. a time bomb, or "phones home", this combination helps ensure that testing focuses on real problems. The requirements in this section extend the coverage required to meet the Security Level [M] requirement [**[M] Document Special Code Use**](#req-2-documented). As with that requirement, there are multiple requirements at this level that require the documentation mandated in this subsection.
[Q] Document Contract Logic
A specification of the business logic that the Tested code functionality is intended
to implement MUST be available to anyone who can call the Tested Code.
[Q] Document System Architecture
Documentation of the system architecture for the Tested code MUST be provided that conveys the overall system design, privileged roles, security assumptions and intended usage.
[Q] Annotate Code with NatSpec
All public interfaces contained in the Tested code MUST be annotated with inline
comments according to the [[NatSpec]] format that explain the intent behind each function, parameter,
event, and return variable, along with developer notes for safe usage.
[Q] Implement as Documented
The Tested code MUST behave as described in the documentation provided for
[Q] Document Contract Logic, and
[Q] Document System Architecture.
[Q] Enforce Least Privilege
Tested code that enables privileged access MUST implement appropriate access control mechanisms that provide the least privilege necessary for those interactions,
based on the documentation provided for
[Q] Document Contract Logic.
This is an Overriding Requirement for
[S] No selfdestruct()
.
Take particular care when assigning access roles to `msg.sender` in constructors or initializers, as this can inadvertently leave a simple factory deployment contract as the new admin of your protocol.
It is particularly important that appropriate access control applies to payments,
as noted in [SWC-105](https://swcregistry.io/docs/SWC-105),
but other actions such as overwriting data as described in
[SWC-124](https://swcregistry.io/docs/SWC-124), or changing specific access controls,
also need to be appropriately protected [[swcregistry]].
This requirement matches [[CWE-284]] Improper Access Control.
See also "[Access Restriction](https://fravoll.github.io/solidity-patterns/access_restriction.html)" in [[solidity-patterns]].
[Q] Access control permissions must be both revocable and transferable
If the Tested code makes use of Access Control for privileged actions, it MUST implement a mechanism to revoke and transfer those permissions.
[Q] No single Admin EOA for privileged actions
If the Tested code makes use of Access Control for privileged actions, it MUST ensure that all critical administrative tasks require multiple signatures to be executed, unless there is a multisig admin that has greater privileges and can revoke permissions in case of a compromised or rogue EOA and reverse any adverse action the EOA has taken.
[Q] Verify External Calls
Tested Code that contains external calls
This is part of a Set of Overriding Requirements for [S] Use Check-Effects-Interaction, and for [M] Protect External Calls.
[Q] Verify tx.origin Usage
For Tested Code that uses `tx.origin`, each instance
This is an Overriding Requirement for [S] No tx.origin.
[GP] Check For and Address New Security Bugs
Check [[!solidity-bugs-json]] and other sources for bugs announced after 15 July 2022
and address them.
[GP] Meet As Many Requirements As Possible
The Tested Code SHOULD meet as many requirements of this specification as possible
at Security Levels above the Security Level for which it is certified.
[GP] Use Latest Compiler
The Tested Code SHOULD use the latest available stable Solidity compiler version.
[GP] Write clear, legible Solidity code
The Tested Code SHOULD be written for easy understanding.
[GP] Follow Accepted ERC Standards
The Tested Code SHOULD conform to finalized [[ERC]] standards when it is
reasonably capable of doing so for its use-case.
An ERC is a category of [[EIP]] (Ethereum Improvement Proposal) that defines application-level standards and conventions,
including smart contract standards such as token standards (EIP-20) and name registries (EIP-137).
[GP] Define a Software License
The Tested Code SHOULD define a software license, which is commonly open-source for Solidity code deployed to public networks.
A software license provides legal guidance on how contributors and users can interact with the code, including auditors and whitehats.
[GP] Disclose New Vulnerabilities Responsibly
Security vulnerabilities that are not addressed by this specification
SHOULD be brought to the attention of the Working Group
and others through responsible disclosure as described in
.
[GP] Use Fuzzing
Fuzzing SHOULD be used to probe Tested Code for errors.
Fuzzing is an automated software testing method that repeatedly activates a contract, using a variety of invalid, malformed, or unexpected inputs, to reveal defects and potential security vulnerabilities.
Fuzzing can take days or even weeks: it is better to be patient than to stop it prematurely.

Fuzzing relies on a Corpus: a set of inputs for a fuzzing target. It is important to maintain the Corpus to maximise code coverage, and helpful to prune unnecessary or duplicate inputs for efficiency. Many tools and input mutation methods can help to build the Corpus for fuzzing. Good practice is to build on and leverage community resources where possible, always checking licensing restrictions.

Another important part of fuzzing is the set of specification rules that is checked throughout the fuzzing process. While the Corpus is the set of inputs for fuzzing targets, the specification rules are business logic checks created specifically for fuzzing and are evaluated for each fuzzing input. For a meaningful and efficient fuzzing campaign, it is not enough to send a large amount of random input to the contracts: this additional set of rules around the contracts should be present, so that it is triggered if fuzzing finds an edge case. The process should not rely only on the checks and reverts already within the contracts and the compiler.

Fuzzing rules and properties can be complex and may depend on specific contracts, functions, variables, their values before and/or after execution, and potentially many other things depending on the fuzzing technology and specification language of choice. If any vulnerabilities are discovered in the Solidity compiler version by fuzzing, please disclose them responsibly.
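As a non-normative illustration, a specification rule can be expressed directly in Solidity as a property for a fuzzing tool to check. The `echidna_`-prefixed boolean function below follows the convention of one such tool (Echidna); the contract under test and its invariant are deliberately simple and purely illustrative.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// A simple token-like contract under test.
contract TokenUnderTest {
    mapping(address => uint256) public balanceOf;
    uint256 public totalSupply;

    constructor() {
        totalSupply = 1_000_000;
        balanceOf[msg.sender] = totalSupply;
    }

    function transfer(address to, uint256 amount) external {
        require(balanceOf[msg.sender] >= amount, "insufficient");
        balanceOf[msg.sender] -= amount;
        balanceOf[to] += amount;
    }
}

// Fuzzing harness: the tool calls transfer() with arbitrary inputs in arbitrary
// sequences and checks the property after every call, reporting any counterexample.
contract TokenFuzzHarness is TokenUnderTest {
    // Specification rule expressed as code: no sequence of transfer() calls
    // should ever give the caller more tokens than the total supply.
    function echidna_balance_bounded() public view returns (bool) {
        return balanceOf[msg.sender] <= totalSupply;
    }
}
```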
[GP] Select an appropriate threshold for multisig wallets
Multisignature requirements for privileged actions SHOULD have a sufficient number of signers, and SHOULD NOT require "1 of N" nor all signatures.
[GP] Use TimeLock delays for sensitive operations
Sensitive operations that affect all or a majority of users SHOULD use [[TimeLock]] delays.
The following is a list of terms defined in this Specification.
- [S] Encode hashes with chainid,
- [S] Compiler Bug SOL-2022-6,
- [S] Use a modern Compiler,
- [M] Explicitly Disambiguate Evaluation Order,
- [M] Avoid Read-only Re-entrancy Attacks,
- [M] Ensure Proper Rounding Of Computations Affecting Value,
- [M] Compiler Bug SOL-2022-7,
- [M] Solidity Compiler Bug 2023-1,
- [M] Use a modern Compiler,
- [Q] Protect Gas Usage,
- [Q] Protect against Oracle Manipulation,
- [Q] Protect Against Governance Takeovers,
- [Q] Intended Replay,
- [Q] Access control permissions must be both revocable and transferable,
- [Q] No single Admin EOA for privileged actions,
- [GP] Use Fuzzing As Part Of Testing,
- [GP] Select an appropriate threshold for multisig wallets,
- [GP] Use TimeLock delays for sensitive operations,
- [M] Validate `ecrecover()` input, and
- [M] Compiler Bug No Zero Ether Send.