Why
mCaptcha is a proof-of-work based CAPTCHA system. Its effectiveness depends on an accurate, up-to-date proof-of-work difficulty setting: if it is too high, it could end up DoS-ing the underlying service it is supposed to protect, and if it is too low, the protection offered will be ineffective.
To select the right difficulty level, mCaptcha admins need current performance benchmarks from a wide variety of devices in use on the internet today.
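To make the tradeoff concrete, the toy sketch below (a plain SHA-256 brute force, not mCaptcha's actual proof-of-work implementation) times a single-threaded solver at a few illustrative difficulty values; real device benchmarks answer the same question across the hardware that actually visits a site.

```ts
// Toy sketch: brute-force a SHA-256 proof-of-work and time it at a few
// difficulty values. This is NOT mCaptcha's actual algorithm; it only
// illustrates that solve time grows with difficulty and varies by device.
import { createHash } from "node:crypto";

// Accept a nonce when the digest, read as a 256-bit integer, falls below a
// threshold; higher difficulty => smaller threshold => more attempts needed.
function solve(salt: string, difficulty: number): number {
  const threshold = (BigInt(2) ** BigInt(256)) / BigInt(difficulty);
  let nonce = 0;
  for (;;) {
    const digest = createHash("sha256").update(salt + nonce).digest("hex");
    if (BigInt("0x" + digest) < threshold) return nonce;
    nonce++;
  }
}

// Illustrative difficulty values only; real deployments tune these from data.
for (const difficulty of [10_000, 100_000, 1_000_000]) {
  const start = Date.now();
  solve("benchmark-salt", difficulty);
  console.log(`difficulty ${difficulty}: ${Date.now() - start} ms`);
}
```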
What
This program runs mCaptcha benchmarks on user devices and collects fully anonymous performance statistics (only device statistics are stored), which are transparently made available to everyone free of charge. mCaptcha admins are kindly requested to refer to the published benchmarks to fine-tune their CAPTCHA deployments.
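As a rough illustration of how the published numbers could feed back into configuration (the record shape and example figures below are hypothetical, not this survey's actual schema or API), an admin might pick the highest difficulty whose 95th-percentile solve time across surveyed devices stays under a latency budget:

```ts
// Hypothetical illustration: choose the highest difficulty whose p95 solve
// time across surveyed devices stays under a latency budget. The record
// shape and sample numbers are made up for the example.
interface BenchmarkRecord {
  difficulty: number;
  solveTimeMs: number; // time one surveyed device took at this difficulty
}

function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

function pickDifficulty(records: BenchmarkRecord[], budgetMs: number): number {
  const byDifficulty = new Map<number, number[]>();
  for (const r of records) {
    const bucket = byDifficulty.get(r.difficulty) ?? [];
    bucket.push(r.solveTimeMs);
    byDifficulty.set(r.difficulty, bucket);
  }
  let best = 0;
  for (const [difficulty, times] of byDifficulty) {
    if (percentile(times, 95) <= budgetMs && difficulty > best) best = difficulty;
  }
  return best;
}

// Example: keep slower devices under roughly 3 seconds (made-up numbers).
const chosen = pickDifficulty(
  [
    { difficulty: 50_000, solveTimeMs: 400 },
    { difficulty: 50_000, solveTimeMs: 900 },
    { difficulty: 500_000, solveTimeMs: 2_600 },
    { difficulty: 500_000, solveTimeMs: 4_100 },
  ],
  3_000,
);
console.log(`chosen difficulty: ${chosen}`);
```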
What data do you collect?
TODO: run the program, then record and share the actual network traffic logs.
Funding
NLnet
2023 development is funded through the NGI0 Entrust Fund, via NLnet. Please see here for more details.