Dec 4, 2021 - Is infinite token approval safe?

It has become common practice for DeFi protocols to request infinite token allowance approvals to improve the end-user experience. From the user’s perspective it’s indeed very convenient and even appealing: once they grant a dapp (decentralized app) an infinite allowance, they can interact with that dapp mostly through single transactions, instead of having to send a token approval transaction before every interaction.

A few months ago, while researching the EIP-2612 proposal, I questioned a similar approach used by the DAI stablecoin. I replicate the question below:

Is DAI-style permit safe to use?

Differently from EIP-2612, which defines a “value” for the allowance, DAI’s approach appears to approve an unlimited allowance for the spender address.

Is it safe to permit a protocol to spend DAI on my behalf?

If not, which use cases is DAI-style permit targeted at?

(link to the full question)

Even though I didn’t get a full answer to my question at the time, one user provided some insight in a comment:

It depends on the protocol. If the protocol is only a smart contract and you see the source code and trust that the contract is bug-free and will only transfer the token based on defined logic and transparent actions/conditions then no harm in doing it (but you can see there is too many “AND”s). – Majd TL

So I concluded that there are too many “ANDs” involved in trusting a protocol with an unlimited token allowance. It’s more flexible, sure, but riskier than using a limited approval if any bug is found in the protocol. Nonetheless, nobody seemed to care; most protocols were doing it by default, without notice.

Being a skeptical person, I carried on never granting infinite allowance approvals to the dapps I use, adopting a few strategies, which I’ll comment on later, for situations where I needed more flexibility.

But then, a few months later, something happened that reminded me of this matter: the Badger DAO exploit…

US$ 120 million stolen

As reported by rekt, the Badger DAO exploit took place this past December 2nd, and a staggering US$ 120 million was stolen from the protocol.

How did this happen? Unlike previous DeFi attacks, which took advantage of smart contract bugs and sophisticated strategies for manipulating protocols’ internal parameters, this one was simple enough that even those unfamiliar with DeFi can follow it easily:

A front-end attack. An unknown party inserted additional approvals to send users’ tokens to their own address. Starting from 00:00:23 UTC on 2.12.2021, the attacker used this stolen trust to fill their own wallet.

Simple as that. For several days Badger users accessed the hacked UI and inadvertently approved, in most cases unlimited, allowances to the attacker’s address. The attacker silently watched hundreds of users approve that address, waiting for the right time to move. And then, once the reward was deemed large enough, the attacker made the move and stole 120 million dollars.

Rumours have been circulating that the project’s Cloudflare account was compromised. Still, it’s a flagrant wake-up call, reminding us that even if a protocol’s smart contracts are audited, battle tested and considered reasonably safe, you can still fall for a hack if the front-end you’re interacting with has been compromised.

Strategies for protecting yourself

There are three main strategies to protect your assets in situations like this when interacting with dapps that don’t support EIP-2612, which I detail below:

1) Always use limited approval: This is the trivial strategy: never grant an unlimited allowance; always use two transactions, the first one approving the protocol for a limited allowance and the second one interacting with the protocol (see the sketch right after this list). Some dapps allow you to disable the default setting of unlimited allowance in their UI. For the ones that don’t, you can edit the approval value in your wallet (e.g. MetaMask) before sending the transaction through.

2) Use a hot wallet: Another common strategy: if you really need to grant an unlimited allowance (for instance to reduce transaction fee costs) you should use a hot wallet, i.e., a separate address that you fund on demand. All funds held by this address will be subject to higher risk, but since it holds only a small portion of your holdings, the risk is limited. By the way, avoid using the same hot wallet for multiple dapps, otherwise you’ll be increasing your risk profile.

3) Deploy a proxy contract: This is a more sophisticated strategy which requires you to write a smart contract that interacts with a protocol on your behalf, even bypassing the front-end altogether. I’ve been using this approach to interact with DEXes. I have a non-upgradable proxy smart contract in place to which I send transactions for swapping tokens, and I grant this proxy an unlimited allowance from a hot wallet of mine. When I send a swap transaction to the proxy, it first approves a limited allowance on the destination DEX, then performs the swap, and finally transfers the tokens back to my hot wallet (the client side of this flow is sketched below). This way I get the best of both worlds: I’m using single transactions to interact with dapps, and my hot wallet is shielded from “allowance exploits”. But be advised that writing smart contracts is inherently risky, so this strategy doesn’t come easy either.
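
To make strategy 1 concrete, here’s a minimal sketch of a limited approval using ethers.js (assuming v6). The RPC endpoint, private key, token and spender addresses, amount and decimals are all placeholders:

import { ethers } from "ethers";

async function approveLimited() {
  // Placeholder connection details -- substitute your own RPC endpoint and key.
  const provider = new ethers.JsonRpcProvider("https://rpc.example.com");
  const wallet = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);

  const erc20Abi = ["function approve(address spender, uint256 value) returns (bool)"];
  const token = new ethers.Contract("0xTOKEN_ADDRESS", erc20Abi, wallet);

  // 1st transaction: approve only what the next interaction needs,
  // instead of the ethers.MaxUint256 many dapp UIs request by default.
  const amount = ethers.parseUnits("100", 18); // 100 tokens, assuming 18 decimals
  await (await token.approve("0xPROTOCOL_SPENDER_ADDRESS", amount)).wait();

  // 2nd transaction (not shown): interact with the protocol,
  // which can now pull at most `amount` from this wallet.
}

And here’s a rough sketch of the client side of the proxy setup from strategy 3, continuing with the same ethers.js import. The proxy address and its swapExactTokens function are hypothetical and merely stand in for whatever interface your own contract exposes:

async function swapViaProxy() {
  const provider = new ethers.JsonRpcProvider("https://rpc.example.com");
  const hotWallet = new ethers.Wallet(process.env.HOT_WALLET_KEY!, provider);

  // Hypothetical interface of the non-upgradable proxy contract.
  const proxyAbi = [
    "function swapExactTokens(address dex, address tokenIn, address tokenOut, uint256 amountIn, uint256 minAmountOut)",
  ];
  const proxy = new ethers.Contract("0xMY_PROXY_ADDRESS", proxyAbi, hotWallet);

  // One-time setup (not shown): the hot wallet grants the proxy an unlimited
  // allowance for tokenIn, so each swap below is a single transaction.
  // On-chain the proxy then approves the DEX for exactly amountIn, performs
  // the swap, and transfers the output tokens back to the hot wallet.
  await proxy.swapExactTokens(
    "0xDEX_ROUTER", "0xTOKEN_IN", "0xTOKEN_OUT",
    ethers.parseUnits("100", 18), // amountIn
    ethers.parseUnits("99", 18),  // minAmountOut
  );
}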

An idea for improving front-end security

Before closing I would like to discuss an idea for improving the security of dapp front-ends. These apps are insecure because, unlike (most) smart contracts and blockchain transactions, their hosting is centralized. A few admins have control of the front-end app, and if one of the admin accounts is hacked the front-end could be tampered with without anyone noticing.

So we need to make sure we are interacting with an untampered front-end in the first place. The solution to this has been around for a long time: signed apps. If we define a method for bundling front-end apps and have a DAO-controlled address sign each bundle, we can greatly reduce the front-end attack surface. All users would then access this front-end app and have their wallets check the app signature. If the signature of the received front-end bundle doesn’t verify against the DAO-controlled signing address, a warning message would be shown and the user would be advised not to interact with the app.
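
As a rough sketch of what the wallet-side check could look like, assuming an Ethereum-style signature over the bundle’s hash and a hypothetical, manually bookmarked DAO signing address (again in TypeScript with ethers.js v6):

import { ethers } from "ethers";

// Hypothetical DAO signing address, bookmarked by the user beforehand.
const TRUSTED_DAO_SIGNER = "0x0000000000000000000000000000000000000000";

// bundleBytes: the front-end bundle exactly as served to the browser.
// signature: published by the DAO alongside the bundle.
function isBundleTrusted(bundleBytes: Uint8Array, signature: string): boolean {
  // Hash the bundle and recover the address that signed that hash.
  const bundleHash = ethers.keccak256(bundleBytes);
  const recovered = ethers.verifyMessage(ethers.getBytes(bundleHash), signature);
  return recovered.toLowerCase() === TRUSTED_DAO_SIGNER.toLowerCase();
}

// The wallet would warn the user (or refuse to expose accounts to the dapp)
// whenever isBundleTrusted(...) returns false.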

There’s one catch though: for this idea to work we would need to manually register/bookmark the signing addresses of all the DAOs we trust. Let’s just hope we don’t get them from a hacked front-end then 😅

Nov 20, 2021 - Prevent merge from a specific branch using Git Hooks

Git Hooks are a little-known but incredibly flexible feature of Git. They allow arbitrary snippets of code to be executed at several stages of the source code development workflow, for instance pre-commit, pre-rebase, pre-merge-commit and post-merge, among others.

I recently had to implement one to prevent developers from accidentally merging from a specific branch, let’s call it “Sandbox”, into feature branches of a project. At first I didn’t know that I was going to use a Git Hook, but after reading a bit about them it seemed like the right tool for the job, and the pre-merge-commit hook introduced in Git 2.24 fit my needs like a glove. Here’s how it works:

This hook is invoked by git-merge, and can be bypassed with the --no-verify option. It takes no parameters, and is invoked after the merge has been carried out successfully and before obtaining the proposed commit log message to make a commit. Exiting with a non-zero status from this script causes the git merge command to abort before creating a commit.

So without further ado, here’s the end result, which was based on this gist:

#!/bin/bash

# This git hook will prevent merging from specific branches

FORBIDDEN_BRANCH="Sandbox"
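
# During a merge, GIT_REFLOG_ACTION holds the in-progress command, e.g. "merge Sandbox"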

if [[ $GIT_REFLOG_ACTION == *merge* ]]; then
	if [[ $GIT_REFLOG_ACTION == *$FORBIDDEN_BRANCH* ]]; then
		echo
		echo \# STOP THE PRESSES!
		echo \#
		echo \# You are trying to merge from: \"$FORBIDDEN_BRANCH\"
		echo \# Surely you don\'t mean that?
		echo \#
		echo \# Run the following command now to discard your working tree changes:
		echo \#
		echo \# git reset --merge
		echo
		exit 1
	fi
fi

It’s a really simple bash script that confirms a merge action is being executed and checks whether the name of the forbidden branch is contained in the command. If both conditions are met, the merge is prevented from being carried out by exiting the script with a non-zero return code.

One downside of Git hooks is that they live in the .git/hooks subdirectory of the Git directory, which is not under source control, so they need to be manually distributed and installed in each developer’s local repository.

Nonetheless, you can use Git’s template directory feature to automate the distribution of the hook to newcomers, since it copies files and directories into the Git directory when a repository is cloned (git clone).



Apr 20, 2021 - Protecting against semantic attacks

The semantic URL attack is one of the most popular attack types aimed at web applications. It falls into the wider “broken access control” category and has been consistently listed amongst the OWASP Top 10 application security risks [1].

In it, an attacker manually adjusts the parameters of an HTTP request, maintaining the URL’s syntax but altering its semantic meaning. If the web application is not protected against this kind of attack, then it’s only a matter of the attacker correctly guessing request parameters to potentially gain access to sensitive information.

Let’s take a look at a simple example. Consider you’re an authenticated user accessing your registered profile in a web application through the following URL:

https://domain.com/account/profile?id=982354

By looking at this request URL we can easily spot the “id” parameter and make an educated guess that it most likely represents the internal identifier of the requesting user. From that assumption, an attacker could then try forging account identifiers to access other users’ profile information:

https://domain.com/account/profile?id=982355

If the web application doesn’t properly implement access control protection against this type of attack, its users’ data will be susceptible to leakage. The attacker could even resort to brute force, iterating over a large number of “id” guesses to maximize the payoff.

Two frequently adopted (but insufficient!) countermeasures for minimizing risks in this situation are:

  1. Use of non-sequential IDs for identifying users
  2. Throttling of users’ web requests to the application

The first one makes guessing valid user (or other resource) IDs much harder, and the second one limits brute force attacks by capping the number of requests an individual user can make to the application. However, neither of these measures solves the real problem; they only mitigate it! It will still be possible to access or modify third parties’ sensitive data by making the right guess for the request parameters.

So what’s the definitive solution to this problem? As we’ll see in the next section, one strategy is for the web application to implement an access control module that verifies the requesting user’s permissions on every HTTP request, without exception, properly protecting against semantic attacks.

Filtering Requests

In essence, a web application that’s susceptible to semantic URL attacks isn’t filtering HTTP requests as it should. Consider the generic web application diagram below:

(Diagram: the request processing pipeline, contrasting the unsafe, unfiltered path with the safe, filtered one)

An authenticated HTTP request arrives at the endpoint and is routed for processing. Without filtering (“unsafe pipeline”), the request goes directly to the application UI / business logic, accesses its storage, and returns unverified data to the caller. With filtering (“safe pipeline”), a verification is performed before the request is actually executed, making sure it’s authorized to run in the first place.

The semantic URL attack filter is responsible for decoding the request’s URL and its parameters, and for performing the necessary verifications on whether the requesting user is allowed to access the resources mapped by those parameters. A typical design includes an “access control” module that implements resource-specific verification rules for querying the caller’s permissions on the set of affected resources. These rules can be independent of each other in the case of unrelated components, but they can also be constructed as a combination of lower-level rules for more elaborate resources. For a web request to be successfully validated, the semantic URL attack filter must execute all pertinent access control rules based on the decoded request.
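
To make this concrete, here’s a minimal sketch of such a filter as an Express middleware in TypeScript, using the profile URL from the earlier example. The findProfile helper, the ownership rule and the way the authenticated user ID is obtained are all placeholders for your application’s own access control rules:

import express, { Request, Response, NextFunction } from "express";

const app = express();

// Placeholder data-access helper: looks up a profile by its id.
async function findProfile(id: string): Promise<{ id: string; ownerId: string } | null> {
  return null; // ...query the database here
}

// Access control rule for the "profile" resource: only its owner may read it.
async function canReadProfile(userId: string, profileId: string): Promise<boolean> {
  const profile = await findProfile(profileId);
  return profile !== null && profile.ownerId === userId;
}

// Semantic URL attack filter: runs on every request, before the use case.
async function authorizeProfileAccess(req: Request, res: Response, next: NextFunction) {
  const userId = (req as any).authenticatedUserId; // set earlier by the authentication layer
  const profileId = String(req.query.id);

  if (!(await canReadProfile(userId, profileId))) {
    res.status(403).send("Forbidden"); // block the unsafe pipeline
    return;
  }
  next(); // safe pipeline: hand the request over to the use case
}

app.get("/account/profile", authorizeProfileAccess, (req, res) => {
  // Scoped use case: by the time we get here the caller is known to own this profile.
  res.json({ id: req.query.id });
});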

As you can see from the diagram, the request filtering and access control logic are completely decoupled from the application’s presentation and use case layers. Request filtering occurs prior to the execution of use cases. This allows for an effective segregation of responsibilities, making each component’s logic clearer and more concise.

But there’s a catch. Since security verification is performed externally to the application business logic, all application use cases should be scoped, i.e., internal commands and queries must be designed to reduce the request’s footprint to the minimum required for it to execute successfully without compromising sensitive data; otherwise the whole request filtering procedure would be useless.
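
For instance, a scoped query constrains its result set to what the requesting caller is entitled to, instead of relying solely on the upstream filter. A sketch, with placeholder table and column names and a generic db.query interface:

type Db = { query: (sql: string, params: unknown[]) => Promise<any[]> };

// Unscoped (relies entirely on the upstream filter):
//   SELECT id, display_name FROM profiles WHERE id = $1
// Scoped (the use case itself narrows the footprint to the caller):
async function getProfileScoped(db: Db, profileId: string, requestingUserId: string) {
  const rows = await db.query(
    "SELECT id, display_name FROM profiles WHERE id = $1 AND owner_id = $2",
    [profileId, requestingUserId],
  );
  return rows[0] ?? null; // never returns another user's profile, even if the filter misfires
}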

Performance Considerations

The proposed design brings a few performance considerations. Since access control logic is decoupled from use case logic, requests will incur at least one additional database round-trip to fetch the data required for security verification. In more complex cases, where the request accesses several resources, this could mean multiple additional round-trips. To mitigate this performance drawback two techniques can be employed: i) caching and ii) hierarchical security.

Caching of users’ resource permissions can be keyed by the resources’ unique identifiers. An appropriate cache invalidation strategy should be adopted, according to the application’s security requirements, to prevent users from holding resource permissions that have already been revoked. A sliding cache expiration policy may be adequate, expiring an authorized user’s entries only when that user becomes inactive, improving overall performance.
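
A bare-bones sketch of such a cache, in-memory and keyed by user and resource IDs (a real deployment would more likely sit on a shared store such as Redis, and would also need explicit invalidation when permissions are revoked):

// Sliding expiration: every hit pushes the entry's expiry forward.
const SLIDING_TTL_MS = 15 * 60 * 1000; // e.g. 15 minutes of inactivity

type CacheEntry = { allowed: boolean; expiresAt: number };
const permissionCache = new Map<string, CacheEntry>();

function getCachedPermission(userId: string, resourceId: string): boolean | undefined {
  const key = `${userId}:${resourceId}`;
  const entry = permissionCache.get(key);
  if (!entry || entry.expiresAt < Date.now()) {
    permissionCache.delete(key);
    return undefined; // miss: fall back to the access control rules
  }
  entry.expiresAt = Date.now() + SLIDING_TTL_MS; // slide the expiration window
  return entry.allowed;
}

function cachePermission(userId: string, resourceId: string, allowed: boolean): void {
  permissionCache.set(`${userId}:${resourceId}`, { allowed, expiresAt: Date.now() + SLIDING_TTL_MS });
}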

Hierarchical security comes into play to reduce the number of resources whose access permissions need to be evaluated. The concept is simple: if a user holds access permissions on a “parent” resource then, since the application’s use case logic is scoped, we can expect the user to have at least the same level of access on that resource’s “children” without actually performing the verification for each of them.
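
Sketched below under the same placeholder assumptions: a check that first consults the (typically cached) permission on the parent and only falls back to a direct, per-resource verification when no ancestor grants access:

type Resource = { id: string; parentId?: string };

// Placeholder repository and direct-rule lookups -- wire these to your own
// data access and access control implementations.
async function findResource(id: string): Promise<Resource | null> { return null; }
async function hasDirectPermission(userId: string, resourceId: string): Promise<boolean> { return false; }

async function canAccess(userId: string, resourceId: string): Promise<boolean> {
  const resource = await findResource(resourceId);
  if (!resource) return false;

  // Permission on any ancestor implies permission here, because scoped use
  // cases never let a child expose more than its parent; the ancestor check
  // is usually a cache hit, so children cost no extra round-trips.
  if (resource.parentId && (await canAccess(userId, resource.parentId))) {
    return true;
  }
  return hasDirectPermission(userId, resource.id);
}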

In closing, it is important to emphasize that a key requirement of the presented protection strategy is that developers only implement scoped use cases. All developers should be aware of this requirement while coding. Hence, code review will be particularly important to prevent security vulnerabilities from reaching the master branch of the codebase.


Sources

[1] OWASP. Top 10 Application Security Risks - 2017

[2] Wikipedia. Semantic URL attack