Compare commits

...

123 commits

Author SHA1 Message Date
Aravinth Manivannan 41f36eb67f Merge pull request 'fix(deps): update rust crate clap to v4.5.7' (#51) from renovate/clap-4.x-lockfile into master
Some checks failed
ci/woodpecker/push/woodpecker Pipeline failed
Reviewed-on: #51
2024-06-10 23:54:32 +05:30
Renovate Bot 5bcd0a76ad fix(deps): update rust crate clap to v4.5.7
Some checks failed
ci/woodpecker/pr/woodpecker Pipeline failed
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-10 15:01:04 +00:00
Aravinth Manivannan 8dccd63ef2 Merge pull request 'chore(deps): update dependency pyzmq to v26' (#48) from renovate/pyzmq-26.x into master
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
Reviewed-on: #48
2024-06-08 13:27:35 +05:30
Renovate Bot dfa21dcc69 chore(deps): update dependency pyzmq to v26
Some checks failed
ci/woodpecker/pr/woodpecker Pipeline failed
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 21:30:25 +00:00
Aravinth Manivannan 73f0193ec5 Merge pull request 'chore(deps): update dependency gevent to v24' (#46) from renovate/gevent-24.x into master
Some checks failed
ci/woodpecker/push/woodpecker Pipeline failed
Reviewed-on: #46
2024-06-08 02:46:28 +05:30
Aravinth Manivannan e51cfc748a Merge pull request 'fix(deps): update tonic monorepo to 0.11.0' (#43) from renovate/tonic-monorepo into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #43
2024-06-08 02:46:12 +05:30
Aravinth Manivannan dd7176d47c Merge branch 'master' into renovate/tonic-monorepo
All checks were successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
2024-06-08 02:45:56 +05:30
Aravinth Manivannan 0e0db2772c Merge pull request 'fix(deps): update rust crate derive_builder to 0.20.0' (#38) from renovate/derive_builder-0.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #38
2024-06-08 02:45:49 +05:30
Aravinth Manivannan cdc23c4013 Merge pull request 'chore(deps): update dependency bulma to v1' (#44) from renovate/bulma-1.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #44
2024-06-08 02:45:23 +05:30
Aravinth Manivannan 8373e417c1 Merge pull request 'fix(deps): update rust crate uuid to v1.8.0' (#42) from renovate/uuid-1.x-lockfile into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #42
2024-06-08 02:45:15 +05:30
Aravinth Manivannan df3e557319 Merge pull request 'chore(deps): update dependency locust to v2.29.0' (#31) from renovate/locust-2.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #31
2024-06-08 02:45:10 +05:30
Aravinth Manivannan 8745761634 Merge pull request 'chore(deps): update dependency certifi to v2024' (#45) from renovate/certifi-2024.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #45
2024-06-08 02:45:04 +05:30
Renovate Bot c04e14f2a0 chore(deps): update dependency gevent to v24
Some checks failed
ci/woodpecker/push/woodpecker Pipeline failed
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 19:31:03 +00:00
Renovate Bot e4add5ed97 chore(deps): update dependency certifi to v2024
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 19:31:02 +00:00
Renovate Bot 74ab8abe8b chore(deps): update dependency bulma to v1
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 19:30:58 +00:00
Renovate Bot 57694b469a fix(deps): update tonic monorepo to 0.11.0
Some checks failed
ci/woodpecker/pr/woodpecker Pipeline failed
ci/woodpecker/push/woodpecker Pipeline was successful
2024-06-07 19:30:54 +00:00
Renovate Bot ee126cff49 fix(deps): update rust crate uuid to v1.8.0
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 19:30:49 +00:00
Renovate Bot 3ff434ea0c fix(deps): update rust crate derive_builder to 0.20.0
Some checks failed
ci/woodpecker/push/woodpecker Pipeline failed
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 19:30:42 +00:00
Renovate Bot 37a9a0342d chore(deps): update dependency locust to v2.29.0
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 19:30:30 +00:00
Aravinth Manivannan 7cecb8e985 Merge pull request 'fix(deps): update rust crate config to 0.14' (#37) from renovate/config-0.x into master
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
Reviewed-on: #37
2024-06-08 00:37:32 +05:30
Aravinth Manivannan de15326c71 Merge pull request 'fix(deps): update rust crate tokio to v1.38.0' (#41) from renovate/tokio-1.x-lockfile into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #41
2024-06-08 00:37:16 +05:30
Aravinth Manivannan 9cd6591631 Merge pull request 'fix(deps): update rust crate pretty_env_logger to 0.5.0' (#40) from renovate/pretty_env_logger-0.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #40
2024-06-08 00:37:11 +05:30
Aravinth Manivannan f9e40fd0f8 Merge pull request 'fix(deps): update rust crate clap to v4.5.6' (#36) from renovate/clap-4.x-lockfile into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #36
2024-06-08 00:37:03 +05:30
Aravinth Manivannan 03cefb4f45 Merge pull request 'chore(deps): update rust crate base64 to 0.22.0' (#35) from renovate/base64-0.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #35
2024-06-08 00:36:58 +05:30
Aravinth Manivannan 4bf46f6957 Merge pull request 'chore(deps): update dependency itsdangerous to v2.2.0' (#30) from renovate/itsdangerous-2.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #30
2024-06-08 00:36:55 +05:30
Renovate Bot e1190647a3 fix(deps): update rust crate tokio to v1.38.0
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 17:01:46 +00:00
Renovate Bot 753888a3ab fix(deps): update rust crate pretty_env_logger to 0.5.0
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 17:01:41 +00:00
Renovate Bot 642d81c151 fix(deps): update rust crate config to 0.14
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 17:01:22 +00:00
Renovate Bot 62c81eff13 fix(deps): update rust crate clap to v4.5.6
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 17:01:10 +00:00
Renovate Bot f48cb7d4b0 chore(deps): update rust crate base64 to 0.22.0
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 17:01:05 +00:00
Renovate Bot 87185f964e chore(deps): update dependency itsdangerous to v2.2.0
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 17:00:57 +00:00
Aravinth Manivannan 7dc383c183 Merge pull request 'chore(deps): update dependency zope.interface to v6.4.post2' (#34) from renovate/zope.interface-6.x into master
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
Reviewed-on: #34
2024-06-07 22:22:17 +05:30
Aravinth Manivannan 650f54c47d Merge pull request 'chore(deps): update dependency urllib3 to v2.2.1' (#33) from renovate/urllib3-2.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #33
2024-06-07 22:22:12 +05:30
Aravinth Manivannan 77d388bf37 Merge pull request 'chore(deps): update dependency requests to v2.32.3' (#32) from renovate/requests-2.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #32
2024-06-07 22:22:08 +05:30
Aravinth Manivannan d7aafcc0a0 Merge pull request 'chore(deps): update dependency idna to v3.7' (#29) from renovate/idna-3.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #29
2024-06-07 22:21:58 +05:30
Aravinth Manivannan 1f164763ba Merge pull request 'chore(deps): update dependency grpcio to v1.64.1' (#27) from renovate/grpcio-1.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #27
2024-06-07 22:21:53 +05:30
Aravinth Manivannan 2917167471 Merge pull request 'chore(deps): update dependency geventhttpclient to v2.3.1' (#26) from renovate/geventhttpclient-2.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #26
2024-06-07 22:21:49 +05:30
Aravinth Manivannan 3ef8964551 Merge pull request 'chore(deps): update dependency msgpack to v1.0.8' (#12) from renovate/msgpack-1.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #12
2024-06-07 22:21:45 +05:30
Renovate Bot a8bdf8b101 chore(deps): update dependency zope.interface to v6.4.post2
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 14:30:58 +00:00
Renovate Bot fe64fefcd3 chore(deps): update dependency urllib3 to v2.2.1
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 14:30:57 +00:00
Renovate Bot 5ae8f1d8e9 chore(deps): update dependency requests to v2.32.3
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 14:30:56 +00:00
Renovate Bot f0933b145a chore(deps): update dependency idna to v3.7
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 14:30:49 +00:00
Renovate Bot 14ca663da7 chore(deps): update dependency grpcio to v1.64.1
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 14:30:42 +00:00
Renovate Bot 67ad8ed916 chore(deps): update dependency geventhttpclient to v2.3.1
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 14:30:39 +00:00
Renovate Bot 45f4c86759 chore(deps): update dependency msgpack to v1.0.8
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 14:30:38 +00:00
Aravinth Manivannan ac86a68911 Merge pull request 'chore(deps): update rust crate serde_json to v1.0.117' (#18) from renovate/serde_json-1.x-lockfile into master
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
Reviewed-on: #18
2024-06-07 19:55:57 +05:30
Aravinth Manivannan 5db9a4fdaf Merge pull request 'chore(deps): update dependency protobuf to v4.25.3' (#13) from renovate/protobuf-4.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #13
2024-06-07 19:55:37 +05:30
Aravinth Manivannan fde65b9967 Merge pull request 'fix(deps): update rust crate actix to v0.13.3' (#19) from renovate/actix-0.x-lockfile into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #19
2024-06-07 19:55:33 +05:30
Aravinth Manivannan 27b8b233ae Merge pull request 'fix(deps): update rust crate async-trait to v0.1.80' (#20) from renovate/async-trait-0.x-lockfile into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #20
2024-06-07 19:55:29 +05:30
Aravinth Manivannan 927d37d35d Merge pull request 'fix(deps): update rust crate prost to v0.12.6' (#21) from renovate/tokio-prost-monorepo into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #21
2024-06-07 19:55:25 +05:30
Aravinth Manivannan a57c008bb6 Merge pull request 'fix(deps): update rust crate serde to v1.0.203' (#22) from renovate/serde-monorepo into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #22
2024-06-07 19:55:21 +05:30
Aravinth Manivannan 52e1544b8a Merge pull request 'fix(deps): update rust crate tokio-stream to v0.1.15' (#23) from renovate/tokio-stream-0.x-lockfile into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #23
2024-06-07 19:55:16 +05:30
Aravinth Manivannan dc271018e9 Merge pull request 'chore(deps): update dependency blinker to v1.8.2' (#24) from renovate/blinker-1.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #24
2024-06-07 19:55:12 +05:30
Aravinth Manivannan 92a48574a9 Merge pull request 'chore(deps): update dependency bulma to v0.9.4' (#25) from renovate/bulma-0.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #25
2024-06-07 19:55:07 +05:30
Renovate Bot 31ac9ea859 chore(deps): update dependency bulma to v0.9.4
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 10:02:59 +00:00
Renovate Bot e34fc6b52f chore(deps): update dependency blinker to v1.8.2
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 10:02:58 +00:00
Renovate Bot 04f74b8a5d fix(deps): update rust crate tokio-stream to v0.1.15
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 10:02:55 +00:00
Renovate Bot d4b11163b6 fix(deps): update rust crate serde to v1.0.203
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 10:02:52 +00:00
Renovate Bot 6c3d57743c fix(deps): update rust crate prost to v0.12.6
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 10:02:48 +00:00
Renovate Bot 8447a9fabf fix(deps): update rust crate async-trait to v0.1.80
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 10:02:45 +00:00
Renovate Bot db1f226360 fix(deps): update rust crate actix to v0.13.3
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 10:02:41 +00:00
Renovate Bot 62f7503910 chore(deps): update dependency protobuf to v4.25.3
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 10:02:32 +00:00
Aravinth Manivannan 70b97b1d4f Merge pull request 'chore(deps): update dependency flask-cors to v4.0.1' (#8) from renovate/flask-cors-4.x into master
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
Reviewed-on: #8
2024-06-07 15:31:44 +05:30
Aravinth Manivannan 62cc4c43e5 Merge pull request 'chore(deps): update dependency greenlet to v3.0.3' (#9) from renovate/greenlet-3.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #9
2024-06-07 15:31:38 +05:30
Aravinth Manivannan 579310f7a9 Merge pull request 'chore(deps): update dependency jinja2 to v3.1.4' (#10) from renovate/jinja2-3.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #10
2024-06-07 15:31:31 +05:30
Aravinth Manivannan 619c39d102 Merge pull request 'chore(deps): update dependency markupsafe to v2.1.5' (#11) from renovate/markupsafe-2.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #11
2024-06-07 15:31:24 +05:30
Aravinth Manivannan 394b42a414 Merge pull request 'chore(deps): update dependency flask to v3.0.3' (#7) from renovate/flask-3.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #7
2024-06-07 15:31:11 +05:30
Renovate Bot 1a6986203f chore(deps): update rust crate serde_json to v1.0.117
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-07 10:01:06 +00:00
Aravinth Manivannan 7d5ca30ae8 Merge pull request 'chore(deps): update dependency psutil to v5.9.8' (#14) from renovate/psutil-5.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #14
2024-06-07 15:30:59 +05:30
Aravinth Manivannan eb95c05651 Merge pull request 'chore(deps): update dependency werkzeug to v3.0.3' (#15) from renovate/werkzeug-3.x into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #15
2024-06-07 15:30:52 +05:30
Aravinth Manivannan 2b0ce36477 Merge pull request 'chore(deps): update rust crate anyhow to v1.0.86' (#16) from renovate/anyhow-1.x-lockfile into master
Some checks are pending
ci/woodpecker/push/woodpecker Pipeline is pending
Reviewed-on: #16
2024-06-07 15:30:46 +05:30
Renovate Bot 8387a6338e chore(deps): update rust crate anyhow to v1.0.86
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-04 08:01:19 +00:00
Renovate Bot 8cc021d1d1 chore(deps): update dependency werkzeug to v3.0.3
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-04 08:01:05 +00:00
Renovate Bot 29aa823dc6 chore(deps): update dependency psutil to v5.9.8
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-04 08:01:03 +00:00
Renovate Bot 5f66f088ac chore(deps): update dependency markupsafe to v2.1.5
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-04 08:00:59 +00:00
Renovate Bot 92c2437cf9 chore(deps): update dependency jinja2 to v3.1.4
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-04 08:00:58 +00:00
Renovate Bot 172035626b chore(deps): update dependency greenlet to v3.0.3
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-04 08:00:56 +00:00
Renovate Bot c32b1b717e chore(deps): update dependency flask-cors to v4.0.1
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-04 08:00:55 +00:00
Renovate Bot d11ebbc50a chore(deps): update dependency flask to v3.0.3
All checks were successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-04 08:00:53 +00:00
Aravinth Manivannan a6a48eec47 Merge pull request 'chore: Configure Renovate' (#6) from renovate/configure into master
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
Reviewed-on: #6
2024-06-04 13:04:52 +05:30
Renovate Bot 0ff5669e96 Add renovate.json
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2024-06-04 07:34:00 +00:00
Aravinth Manivannan 2ae3d9625e Merge pull request 'Integration an unit tests' (#4) from feat-integration-tests into master
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
Reviewed-on: #4
2023-12-31 03:25:29 +05:30
Aravinth Manivannan ff71e35da3
fix: create virtualenv
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2023-12-31 03:05:06 +05:30
Aravinth Manivannan d7fe9332d6
debug: mv dcache_py into tess
Some checks failed
ci/woodpecker/pr/woodpecker Pipeline is pending
ci/woodpecker/push/woodpecker Pipeline failed
2023-12-31 03:04:34 +05:30
Aravinth Manivannan a10fb878f5
fix: rm stub
Some checks failed
ci/woodpecker/push/woodpecker Pipeline failed
ci/woodpecker/pr/woodpecker Pipeline failed
2023-12-31 03:03:00 +05:30
Aravinth Manivannan f47c0867d3
fix: use python virtualenv 2023-12-31 03:00:36 +05:30
Aravinth Manivannan ae6651e624
fix: rm stray command 2023-12-31 03:00:36 +05:30
Aravinth Manivannan 73aa755035
fix: CI: use different stage name for integration tests 2023-12-31 03:00:36 +05:30
Aravinth Manivannan 240b5ec13a
chore: linting 2023-12-31 03:00:36 +05:30
Aravinth Manivannan e548a532a0
feat: add integration testing 2023-12-31 03:00:36 +05:30
Aravinth Manivannan c72688656f
feat: add integration tests 2023-12-31 03:00:36 +05:30
Aravinth Manivannan 853ed44ba7 Merge pull request 'feat: benchmark report' (#5) from benchmark-report into master
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
Reviewed-on: #5
2023-12-31 02:54:50 +05:30
Aravinth Manivannan a82b9044d5
feat: benchmark report
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2023-12-31 02:54:11 +05:30
Aravinth Manivannan f20d044537
feat: add readme
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
2023-12-31 01:57:03 +05:30
Aravinth Manivannan 9b281151e7 Merge pull request 'Use atomic types to speedup variable difficulty alogirthm' (#3) from optimize-libmcaptha into master
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
Reviewed-on: #3
2023-12-31 01:31:22 +05:30
Aravinth Manivannan fd9ac9d312
feat: implement RetrievePow, DeletePow, CacheResult, VerifyCaptchaResult, DeleteCaptchaResult, CaptchaExists& GetVisitorCount RPCs
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2023-12-31 01:21:32 +05:30
Aravinth Manivannan b8a2a026d6
feat: test variable difficulty driver
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
2023-12-30 20:28:05 +05:30
Aravinth Manivannan 1f4d3574ab
feat: use DashMap to speedup hashcash implementation
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
2023-12-30 18:00:09 +05:30
Aravinth Manivannan c11f89094a
feat: post bottleneck single req and pipeline reqs results
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
2023-12-30 16:19:25 +05:30
Aravinth Manivannan 20296d5a70
feat: use faster counter impl 2023-12-30 16:19:24 +05:30
Aravinth Manivannan 45a49288b7
feat: custom counter implementation to solve bottleneck 2023-12-30 16:19:24 +05:30
Aravinth Manivannan 59180fd86f
feat: connection pool for log replication 2023-12-30 16:19:24 +05:30
Aravinth Manivannan 41e438828c
feat: bottlenect diagnosis and solution flamegraphs 2023-12-30 16:19:24 +05:30
Aravinth Manivannan 0f6d9e387f
feat: identify libmcaptcha bottleneck 2023-12-30 16:19:23 +05:30
Aravinth Manivannan b4469e03d0 Merge pull request 'feat-protobuf' (#1) from feat-protobuf into master
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
Reviewed-on: #1
2023-12-30 14:24:35 +05:30
Aravinth Manivannan 01d4c2fce3
feat: install protoc
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2023-12-30 14:18:15 +05:30
Aravinth Manivannan aed464fe38 Merge pull request 'benches' (#2) from benches into master
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
Reviewed-on: #2
2023-12-30 14:07:18 +05:30
Aravinth Manivannan 59b847b740
feat: bench pipeline implementaion
Some checks failed
ci/woodpecker/push/woodpecker Pipeline failed
ci/woodpecker/pr/woodpecker Pipeline failed
2023-12-29 16:07:45 +05:30
Aravinth Manivannan a288450721
feat: batch RPC
Some checks failed
ci/woodpecker/push/woodpecker Pipeline failed
2023-12-29 15:58:36 +05:30
Aravinth Manivannan 0f5762536b
debug: pipeline dump 2023-12-28 19:15:07 +05:30
Aravinth Manivannan 565ffec3c6
feat: port locust to use GRPC payloads
Some checks failed
ci/woodpecker/push/woodpecker Pipeline failed
ci/woodpecker/pr/woodpecker Pipeline failed
2023-12-28 14:23:45 +05:30
Aravinth Manivannan 69172db518
feat: add nojson results 2023-12-28 14:16:58 +05:30
Aravinth Manivannan 67051bc187
feat: call individual RPC methods without the JSON hack in test script 2023-12-28 13:45:43 +05:30
Aravinth Manivannan 3ad3d280d9
feat: use protobuf without JSON hack 2023-12-28 13:45:01 +05:30
Aravinth Manivannan 337f89f25a
feat: use grpc within locust
Some checks failed
ci/woodpecker/pr/woodpecker Pipeline is pending
ci/woodpecker/push/woodpecker Pipeline failed
2023-12-28 13:44:21 +05:30
Aravinth Manivannan e50b7a5751
chore: log learner joins 2023-12-28 13:43:53 +05:30
Aravinth Manivannan 4446cd83bd
feat: python grpc test client
Some checks failed
ci/woodpecker/push/woodpecker Pipeline failed
ci/woodpecker/pr/woodpecker Pipeline failed
2023-12-27 19:48:51 +05:30
Aravinth Manivannan 6d90790e58
fix: actix actors will only start on actix_rt 2023-12-27 19:44:22 +05:30
Aravinth Manivannan ad77db65f3
feat: add nopooling(pipeline+nopipeline) and pooled pipeline benches
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2023-12-26 18:46:18 +05:30
Aravinth Manivannan bebab4c8a5
feat: conn pool no pipeline
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
note: lost previous benchmarks, so redoing
2023-12-26 18:38:06 +05:30
Aravinth Manivannan ba9694b31f
chore: cleanup deadcode from REST impl
Some checks failed
ci/woodpecker/push/woodpecker Pipeline failed
ci/woodpecker/pr/woodpecker Pipeline failed
2023-12-26 15:13:41 +05:30
Aravinth Manivannan a2dd2c31f6
feat: add protobuf bench
Some checks failed
ci/woodpecker/push/woodpecker Pipeline failed
2023-12-26 14:59:43 +05:30
Aravinth Manivannan 70ef43b720
feat: use protobuf for RPC
Some checks failed
ci/woodpecker/push/woodpecker Pipeline failed
2023-12-26 14:58:55 +05:30
103 changed files with 12468 additions and 1768 deletions

1
.gitignore vendored

@ -156,3 +156,4 @@ keys
htmlcov/
tmp/
static/
nohup.out


@ -2,11 +2,26 @@ steps:
  backend:
    image: rust
    commands:
      - apt update
      - apt-get install -y --no-install-recommends protobuf-compiler
      - cargo build
      - cargo test --lib
      # - make migrate
      # - make
      # - make release
      # - make test // requires Docker-in-Docker
  integration_tests:
    image: python
    commands:
      - pip install virtualenv && virtualenv venv
      - . venv/bin/activate && pip install -r requirements.txt
      - nohup ./target/debug/main --id 1 --http-addr 127.0.0.1:9001 --introducer-addr 127.0.0.1:9001 --introducer-id 1 --cluster-size 3 &
      - sleep 1
      - nohup ./target/debug/main --id 2 --http-addr 127.0.0.1:9002 --introducer-addr 127.0.0.1:9001 --introducer-id 1 --cluster-size 3 &
      - sleep 1
      - nohup ./target/debug/main --id 3 --http-addr 127.0.0.1:9003 --introducer-addr 127.0.0.1:9001 --introducer-id 1 --cluster-size 3 &
      - mv dcache_py/ tests/
      - . venv/bin/activate && python tests/test.py
  build_docker_img:
    image: plugins/docker

2162
Cargo.lock generated

File diff suppressed because it is too large

Cargo.toml

@ -2,6 +2,7 @@
name = "dcache"
version = "0.1.0"
edition = "2021"
build = "build.rs"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@ -12,33 +13,40 @@ openraft = { version = "0.8.8", features = ["serde", "single-term-leader"]}
#libmcaptcha = { path="/src/atm/code/mcaptcha/libmcaptcha", features=["full"] }
libmcaptcha = { git = "https://github.com/mcaptcha/libmcaptcha", branch = "feat-dcache", features = ["full"]}
tracing = { version = "0.1.37", features = ["log"] }
serde_json = "1.0.96"
serde = { version = "1.0.163", features = ["derive"] }
serde_json = "1"
serde = { version = "1", features = ["derive"] }
byteorder = "1.4.3"
actix-web = "4"
actix-web-httpauth = "0.8.0"
futures-util = { version = "0.3.17", default-features = false, features = ["std"] }
lazy_static = "1.4.0"
pretty_env_logger = "0.4.0"
pretty_env_logger = "0.5.0"
uuid = { version = "1", features = ["v4"] }
actix-web-codegen-const-routes = { version = "0.1.0", tag = "0.1.0", git = "https://github.com/realaravinth/actix-web-codegen-const-routes" }
derive_builder = "0.11.2"
config = { version = "0.11", features = ["toml"] }
derive_builder = "0.20.0"
config = { version = "0.14", features = ["toml"] }
derive_more = "0.99.17"
url = { version = "2.2.2", features = ["serde"]}
async-trait = "0.1.36"
clap = { version = "4.1.11", features = ["derive", "env"] }
reqwest = { version = "0.11.9", features = ["json"] }
tokio = { version = "1.0", default-features = false, features = ["sync"] }
tokio = { version = "1.0", default-features = false, features = ["sync", "macros", "rt-multi-thread", "time"] }
tracing-subscriber = { version = "0.3.0", features = ["env-filter"] }
actix = "0.13.0"
tonic = { version = "0.11.0", features = ["transport", "channel"] }
prost = "0.12.3"
tokio-stream = "0.1.14"
async-stream = "0.3.5"
actix-rt = "2.9.0"
futures = "0.3.30"
tower-service = "0.3.2"
dashmap = { version = "5.5.3", features = ["serde"] }
[build-dependencies]
serde_json = "1"
tonic-build = "0.11.0"
[dev-dependencies]
actix-rt = "2.7.0"
base64 = "0.13.0"
base64 = "0.22.0"
anyhow = "1.0.63"
maplit = "1.0.2"
#[profile.release]
#debug = true

Dockerfile

@ -4,6 +4,7 @@
FROM rust:latest as rust
WORKDIR /src
RUN apt update && apt-get install -y --no-install-recommends protobuf-compiler
COPY . .
RUN cargo build --release

6
Makefile Normal file

@ -0,0 +1,6 @@
python.gen:
	. venv/bin/activate && cd dcache_py
	python -m grpc_tools.protoc \
		-I=./proto/dcache/ --python_out=dcache_py/ \
		--pyi_out=dcache_py/ \
		--grpc_python_out=dcache_py/ ./proto/dcache/dcache.proto

44
README.md Normal file

@ -0,0 +1,44 @@
[![status-badge](https://ci.batsense.net/api/badges/105/status.svg)](https://ci.batsense.net/repos/105)
---
# dcache: Distributed, Highly Available cache implementation for mCaptcha
## Overview
- Uses Raft consensus algorithm via [openraft](https://crates.io/crates/openraft)
- GRPC via [tonic](https://crates.io/crates/tonic)
## Tips
We recommend running at least three instances of dcache in your
deployment.
**NOTE: Catastrophic failure will occur when n/2 + 1 instances are
down.**
## Usage
## Firewall configuration
dcache uses a single, configurable port for both server-to-server and client-to-server
communications. Please open that port on your server.
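For example, if dcache listens on port 9001 as in the launch example below, an equivalent ufw rule might be (illustrative only; use whatever port you pass via `--http-addr`):
```bash
# hypothetical firewall rule; 9001 is just the port from the example below
ufw allow 9001/tcp
```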
## Launch
```bash
dcache --id 1 \
--http-addr 127.0.0.1:9001 \
--introducer-addr 127.0.0.1:9001 \
--introducer-id 1 \
--cluster-size 3
```
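Additional nodes join the cluster by pointing `--introducer-addr` and `--introducer-id` at a node that is already running. A sketch mirroring the integration-test pipeline in this change (addresses and IDs are illustrative):
```bash
# illustrative: nodes 2 and 3 join via the introducer started above
dcache --id 2 \
    --http-addr 127.0.0.1:9002 \
    --introducer-addr 127.0.0.1:9001 \
    --introducer-id 1 \
    --cluster-size 3

dcache --id 3 \
    --http-addr 127.0.0.1:9003 \
    --introducer-addr 127.0.0.1:9001 \
    --introducer-id 1 \
    --cluster-size 3
```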
### Options
| Name | Purpose |
| ----------------- | ----------------------------------------------------------- |
| --id | Unique integer to identify node in network |
| --http-addr | Socket address to bind and listen for connections |
| --introducer-addr | Socket address of introducer node; required to join network |
| --introducer-id | ID of the introducer node; required to join network |
| --cluster-size | Total size of the cluster |

64
bench/adaptor.py Normal file

@ -0,0 +1,64 @@
import time
from typing import Any, Callable
import grpc
import grpc.experimental.gevent as grpc_gevent
from grpc_interceptor import ClientInterceptor
from locust import User
from locust.exception import LocustError

# patch grpc so that it uses gevent instead of asyncio
grpc_gevent.init_gevent()


class LocustInterceptor(ClientInterceptor):
    def __init__(self, environment, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.env = environment

    def intercept(
        self,
        method: Callable,
        request_or_iterator: Any,
        call_details: grpc.ClientCallDetails,
    ):
        response = None
        exception = None
        start_perf_counter = time.perf_counter()
        response_length = 0
        try:
            response = method(request_or_iterator, call_details)
            response_length = response.result().ByteSize()
        except grpc.RpcError as e:
            exception = e

        self.env.events.request.fire(
            request_type="grpc",
            name=call_details.method,
            response_time=(time.perf_counter() - start_perf_counter) * 1000,
            response_length=response_length,
            response=response,
            context=None,
            exception=exception,
        )
        return response


class GrpcUser(User):
    abstract = True
    stub_class = None

    def __init__(self, environment):
        super().__init__(environment)
        for attr_value, attr_name in (
            (self.host, "host"),
            (self.stub_class, "stub_class"),
        ):
            if attr_value is None:
                raise LocustError(f"You must specify the {attr_name}.")
        self._channel = grpc.insecure_channel(self.host)
        interceptor = LocustInterceptor(environment=environment)
        self._channel = grpc.intercept_channel(self._channel, interceptor)
        self.stub = self.stub_class(self._channel)


@ -1,95 +1,69 @@
import json
import time
import grpc
import gevent
from pprint import pprint
from locust import FastHttpUser, between, task
from locust import FastHttpUser, between, task, events
password = "fooobarasdfasdf"
username = "realaravinth"
from dcache_py import dcache_pb2 as dcache
from dcache_py.dcache_pb2 import RaftRequest
from dcache_py.dcache_pb2_grpc import DcacheServiceStub
import adaptor
host = "localhost:9001"
captcha_id = "locust"
class Unprotected(FastHttpUser):
# wait_time = between(5, 15)
peers = [
"http://localhost:9001",
"http://localhost:9002",
"http://localhost:9003",
]
leader = "http://localhost:9001"
host = leader
captcha_id="locust"
pipeline_vote = []
for _ in range(0,1000):
pipeline_vote.append({"AddVisitor": captcha_id})
# def on_start(self):
# resp = self.client.get(f"{self.leader}/metrics")
# data = resp.json()
# leader = data["Ok"]["membership_config"]["log_id"]["leader_id"]["node_id"]
# self.leader = self.peers[leader - 1]
# self.host = self.leader
# print(f"Leader: {self.host}")
# self.add_captcha(captcha_id="locust")
def add_captcha(stub: DcacheServiceStub, captcha_id: str):
msg = dcache.AddCaptchaRequest(
id=captcha_id,
mcaptcha=dcache.MCaptcha(
duration=30,
defense=dcache.Defense(
levels=[
dcache.Level(visitor_threshold=50, difficulty_factor=500),
dcache.Level(visitor_threshold=5000, difficulty_factor=50000),
]
),
),
)
resp = stub.AddCaptcha(msg)
pprint(f"Captcha added {captcha_id}: {resp}")
def write(self, data):
resp = self.client.post(f"{self.host}/write", json=data)
print(f"RPC Status: {resp.status_code}")
resp = resp.json()
if "Err" in resp:
leader = resp["Err"]["APIError"]["ForwardToLeader"]["leader_node"]["addr"]
print(f"Forwarding write to leader {leader}")
return write(leader, data)
return resp["Ok"]["data"]
def pipeline_write(self, data):
resp = self.client.post(f"{self.host}/pipeline/write", json=data)
# print(f"RPC Status: {resp.status_code}")
resp = resp.json()
if "Err" in resp:
leader = resp["Err"]["APIError"]["ForwardToLeader"]["leader_node"]["addr"]
print(f"Forwarding write to leader {leader}")
return write(leader, data)
return resp
with grpc.insecure_channel(host) as channel:
stub = DcacheServiceStub(channel)
add_captcha(stub=stub, captcha_id=captcha_id)
pipeline_msgs = []
for _ in range(0,10):
pipeline_msgs.append(dcache.DcacheRequest(addVisitor=dcache.CaptchaID(id=captcha_id)))
pipeline_msgs = dcache.DcacheBatchRequest(requests=pipeline_msgs)
#def pipeline_generate_messages():
# for msg in pipeline_msgs:
# yield msg
class HelloGrpcUser(adaptor.GrpcUser):
stub_class = DcacheServiceStub
host = host
captcha_id = captcha_id
msg = dcache.CaptchaID(id=captcha_id)
def add_vote(self, captcha_id: str):
resp = self.write(data={"AddVisitor": captcha_id})
pprint(resp)
resp = self.stub.AddVisitor(self.msg)
def add_vote_pipeline(self, captcha_id: str):
resp = self.pipeline_write(data=self.pipeline_vote)
# pprint(resp)
def add_captcha(self, captcha_id: str):
params = {
"AddCaptcha": {
"id": captcha_id,
"mcaptcha": {
"visitor_threshold": 0,
"defense": {
"levels": [
{"visitor_threshold": 50, "difficulty_factor": 500},
{"visitor_threshold": 5000, "difficulty_factor": 50000},
],
"current_visitor_threshold": 0,
},
"duration": 30,
},
}
}
resp = self.write(data=params)
pprint(f"Captcha added {captcha_id}: {resp}")
def add_vote_pipeline(self):
res = self.stub.PipelineDcacheOps(pipeline_msgs)
# @task
# def addVote(self):
# self.add_vote(self.captcha_id)
@task
def unprotected(self):
self.add_vote_pipeline(captcha_id=self.captcha_id)
##self.add_vote(captcha_id="locust")
# data = {
# "username": username,
# "password": username,
# "confirm_password": username,
# }
# self.client.post("/unprotected", data=data)
def addVotePipeline(self):
self.add_vote_pipeline()


@ -1,27 +1,27 @@
blinker==1.7.0
blinker==1.8.2
Brotli==1.1.0
certifi==2023.11.17
certifi==2024.6.2
charset-normalizer==3.3.2
click==8.1.7
ConfigArgParse==1.7
Flask==3.0.0
Flask==3.0.3
Flask-BasicAuth==0.2.0
Flask-Cors==4.0.0
gevent==23.9.1
geventhttpclient==2.0.11
greenlet==3.0.2
idna==3.6
itsdangerous==2.1.2
Jinja2==3.1.2
locust==2.20.0
MarkupSafe==2.1.3
msgpack==1.0.7
psutil==5.9.7
pyzmq==25.1.2
requests==2.31.0
Flask-Cors==4.0.1
gevent==24.2.1
geventhttpclient==2.3.1
greenlet==3.0.3
idna==3.7
itsdangerous==2.2.0
Jinja2==3.1.4
locust==2.29.0
MarkupSafe==2.1.5
msgpack==1.0.8
psutil==5.9.8
pyzmq==26.0.3
requests==2.32.3
roundrobin==0.0.4
six==1.16.0
urllib3==2.1.0
Werkzeug==3.0.1
urllib3==2.2.1
Werkzeug==3.0.3
zope.event==5.0
zope.interface==6.1
zope.interface==6.4.post2

212
bench/results/README.md Normal file

@ -0,0 +1,212 @@
# Benchmark Report
Benchmarks were run at various stages of development to keep track of
performance. Tech stacks were changed and the implementation optimized
to increase throughput. This report summarizes the findings of the
benchmarks.
Ultimately, we were able to identify a bottleneck that was previously
hidden in mCaptcha (hidden because a different bottleneck like DB access
eclipsed it :p) [and were able to increase performance of the critical
path by ~147 times](https://git.batsense.net/mCaptcha/dcache/pulls/3)
through a trivial optimization.
## Environment
These benchmarks were run on a noisy development laptop and should be
used for guidance only.
- CPU: AMD Ryzen 5 5600U with Radeon Graphics (12) @ 4.289GHz
- Memory: 22849MiB
- OS: Arch Linux x86_64
- Kernel: 6.6.7-arch1-1
- rustc: 1.73.0 (cc66ad468 2023-10-03)
## Baseline: Tech stack version 1
Actix Web-based networking with JSON as the message format. It was chosen for prototyping, and was later used to set a baseline.
## Without connection pooling in server-to-server communications
### Single requests (no batching)
<details>
<summary>Peak throughput observed was 1117 requests/second (please click
to see charts)</summary>
#### Total number of requests vs time
![number of requests](./v1/nopooling/nopipelining/total_requests_per_second_1703969194.png)
#### Response times(ms) vs time
![response times(ms)](<./v1/nopooling/nopipelining/response_times_(ms)_1703969194.png>)
#### Number of concurrent users vs time
![number of concurrent
users](./v1/nopooling/nopipelining/number_of_users_1703969194.png)
</details>
### Batched requests
<details>
<summary>
Each network request contained 1,000 application requests, so peak throughput observed was 1,800 requests/second.
Please click to see charts</summary>
#### Total number of requests vs time
![number of requests](./v1/pooling/pipelining/total_requests_per_second_1703968582.png)
#### Response times(ms) vs time
![response times(ms)](<./v1/pooling/pipelining/response_times_(ms)_1703968582.png>)
#### Number of concurrent users vs time
![number of concurrent
users](./v1/pooling/pipelining/number_of_users_1703968582.png)
</details>
## With connection pooling in server-to-server communications
### Single requests (no batching)
<details>
<summary>
Peak throughput observed was 3904 requests/second. Please click to see
charts</summary>
#### Total number of requests vs time
![number of requests](./v1/pooling/nopipelining/total_requests_per_second_1703968214.png)
#### Response times(ms) vs time
![response times(ms)](<./v1/pooling/nopipelining/response_times_(ms)_1703968215.png>)
#### Number of concurrent users vs time
![number of concurrent
users](./v1/pooling/nopipelining/number_of_users_1703968215.png)
</details>
### Batched requests
<details>
<summary>
Each network request contained 1,000 application requests, so peak throughput observed was 15,800 requests/second.
Please click to see charts.
</summary>
#### Total number of requests vs time
![number of requests](./v1/pooling/pipelining/total_requests_per_second_1703968582.png)
#### Response times(ms) vs time
![response times(ms)](<./v1/pooling/pipelining/response_times_(ms)_1703968582.png>)
#### Number of concurrent users vs time
![number of concurrent
users](./v1/pooling/pipelining/number_of_users_1703968582.png)
</details>
## Tech stack version 2
Tonic for the network stack and gRPC for the wire format. We ran over a dozen benchmarks with this tech stack. The trend was similar to the ones observed above: throughput was higher when connection pooling was used and even higher when requests were batched. _But_ the throughput of all of these benchmarks was lower than the baseline benchmarks!
The CPU was busier. We put it through [flamegraph](https://github.com/flamegraph-rs/flamegraph) and hit it with the same test suite to identify compute-heavy areas. The result was unexpected:
![flamegraph indicating libmcaptcha being
slow](./v2/libmcaptcha-bottleneck/problem/flamegraph.svg)
libmCaptcha's [AddVisitor
handler](https://github.com/mCaptcha/libmcaptcha/blob/e3f456f35b2c9e55e0475b01b3e05d48b21fd51f/src/master/embedded/counter.rs#L124)
was taking up 59% of the CPU time of the entire test run. This is a very critical part of the variable difficulty factor PoW algorithm that mCaptcha uses. We never ran into this bottleneck before because, in other cache implementations, it was always preceded by a database request. It surfaced here because we are using in-memory data sources in dcache.
libmCaptcha uses an actor-based approach with message passing for clean concurrent state management. Message passing is generally faster in most cases, but in our case, sharing memory using the CPU's concurrency primitives turned out to be significantly faster:
![flamegraph after the counter optimization](./v2/libmcaptcha-bottleneck/solution/flamegraph.svg)
CPU time was reduced from 59% to 0.4%, roughly a 147-fold improvement!
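For illustration, a minimal sketch of the kind of change this involved, assuming the DashMap/atomic-counter approach described in the commits in this compare (illustrative names, not libmCaptcha's actual API): the per-captcha visitor count moves out of an actor and into a shared map of atomic counters, so the hot AddVisitor path becomes a single atomic increment instead of a message round-trip.
```rust
// Sketch only: illustrative types and names, not libmCaptcha's real API.
use dashmap::DashMap;
use std::sync::atomic::{AtomicU32, Ordering};

/// Shared visitor counters, keyed by captcha id.
struct VisitorCounters {
    counters: DashMap<String, AtomicU32>,
}

impl VisitorCounters {
    fn new() -> Self {
        Self {
            counters: DashMap::new(),
        }
    }

    /// The hot path: one atomic increment, no actor mailbox in between.
    /// Returns the new visitor count.
    fn add_visitor(&self, captcha_id: &str) -> u32 {
        let counter = self
            .counters
            .entry(captcha_id.to_string())
            .or_insert_with(|| AtomicU32::new(0));
        counter.fetch_add(1, Ordering::Relaxed) + 1
    }
}
```
With this layout, requests for different captchas land on different map shards and requests for the same captcha contend only on a single atomic, which keeps the actor mailbox out of the critical path in this sketch.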
With this fix in place:
### Connection pooled server-to-server communications, single requests (no batching)
Peak throughput observed was 4816 requests/second, ~1000 requests/second more than the baseline.
#### Total number of requests vs time
![number of requests](./v2/grpc-conn-pool-post-bottleneck/single/total_requests_per_second_1703970940.png)
#### Response times(ms) vs time
![response times(ms)](./v2/grpc-conn-pool-post-bottleneck/single/response_times_(ms)_1703970940.png)
#### Number of concurrent users vs time
![number of concurrent
users](./v2/grpc-conn-pool-post-bottleneck/single/number_of_users_1703970940.png)
### Connection pooled server-to-server communications, batched requests
Each network request contained 1,000 application requests, so peak throughput observed was 95,700 requests/second. This is six times higher than the baseline.
#### Total number of requests vs time
![number of requests](./v2/grpc-conn-pool-post-bottleneck/pipeline/total_requests_per_second_1703971082.png)
#### Response times(ms) vs time
![response times(ms)](./v2/grpc-conn-pool-post-bottleneck/pipeline/response_times_(ms)_1703971082.png)
#### Number of concurrent users vs time
![number of concurrent
users](./v2/grpc-conn-pool-post-bottleneck/pipeline/number_of_users_1703971082.png)

@ -0,0 +1 @@
Count,Message,Traceback,Nodes

@ -0,0 +1 @@
Method,Name,Error,Occurrences

File diff suppressed because one or more lines are too long


@ -0,0 +1,3 @@
Type,Name,Request Count,Failure Count,Median Response Time,Average Response Time,Min Response Time,Max Response Time,Average Content Size,Requests/s,Failures/s,50%,66%,75%,80%,90%,95%,98%,99%,99.9%,99.99%,100%
POST,http://localhost:9001/write,70718,0,300,303.8036002149382,5,939,97.84323651686982,1117.209290974752,0.0,300,340,370,390,430,470,510,550,770,930,940
,Aggregated,70718,0,300,303.8036002149382,5,939,97.84323651686982,1117.209290974752,0.0,300,340,370,390,430,470,510,550,770,930,940


@ -0,0 +1 @@
Count,Message,Traceback,Nodes

@ -0,0 +1 @@
Method,Name,Error,Occurrences

File diff suppressed because one or more lines are too long


@ -0,0 +1,3 @@
Type,Name,Request Count,Failure Count,Median Response Time,Average Response Time,Min Response Time,Max Response Time,Average Content Size,Requests/s,Failures/s,50%,66%,75%,80%,90%,95%,98%,99%,99.9%,99.99%,100%
POST,http://localhost:9001/pipeline/write,70,0,9700,9854.628571428571,7898,12282,98842.68571428572,0.9866591262400619,0.0,9700,10000,11000,11000,12000,12000,12000,12000,12000,12000,12000
,Aggregated,70,0,9700,9854.628571428571,7898,12282,98842.68571428572,0.9866591262400619,0.0,9700,10000,11000,11000,12000,12000,12000,12000,12000,12000,12000


@ -0,0 +1 @@
Count,Message,Traceback,Nodes

@ -0,0 +1 @@
Method,Name,Error,Occurrences

File diff suppressed because one or more lines are too long


@ -0,0 +1,3 @@
Type,Name,Request Count,Failure Count,Median Response Time,Average Response Time,Min Response Time,Max Response Time,Average Content Size,Requests/s,Failures/s,50%,66%,75%,80%,90%,95%,98%,99%,99.9%,99.99%,100%
POST,http://localhost:9001/write,148740,0,6,7.030960064542154,0,485,98.25319349199947,3904.7975720558998,0.0,6,8,9,10,12,15,20,25,170,380,480
,Aggregated,148740,0,6,7.030960064542154,0,485,98.25319349199947,3904.7975720558998,0.0,6,8,9,10,12,15,20,25,170,380,480


@ -0,0 +1 @@
Count,Message,Traceback,Nodes

@ -0,0 +1 @@
Method,Name,Error,Occurrences

File diff suppressed because one or more lines are too long


@ -0,0 +1,3 @@
Type,Name,Request Count,Failure Count,Median Response Time,Average Response Time,Min Response Time,Max Response Time,Average Content Size,Requests/s,Failures/s,50%,66%,75%,80%,90%,95%,98%,99%,99.9%,99.99%,100%
POST,http://localhost:9001/pipeline/write,673,0,620,625.5542347696879,448,734,99835.94056463595,15.80574909851346,0.0,620,640,650,650,680,700,720,720,730,730,730
,Aggregated,673,0,620,625.5542347696879,448,734,99835.94056463595,15.80574909851346,0.0,620,640,650,650,680,700,720,720,730,730,730


@ -0,0 +1 @@
Count,Message,Traceback,Nodes

@ -0,0 +1 @@
Method,Name,Error,Occurrences

File diff suppressed because one or more lines are too long


@ -0,0 +1,3 @@
Type,Name,Request Count,Failure Count,Median Response Time,Average Response Time,Min Response Time,Max Response Time,Average Content Size,Requests/s,Failures/s,50%,66%,75%,80%,90%,95%,98%,99%,99.9%,99.99%,100%
grpc,/dcache.DcacheService/PipelineDcacheOps,3480,0,98,104.35343347919283,85.40578499378171,842.1087349997833,14999.985632183909,95.67244900465325,0.0,98,99,100,100,100,110,120,360,840,840,840
,Aggregated,3480,0,98,104.35343347919283,85.40578499378171,842.1087349997833,14999.985632183909,95.67244900465325,0.0,98,99,100,100,100,110,120,360,840,840,840


@ -0,0 +1 @@
Count,Message,Traceback,Nodes

@ -0,0 +1 @@
Method,Name,Error,Occurrences

File diff suppressed because one or more lines are too long


@ -0,0 +1,3 @@
Type,Name,Request Count,Failure Count,Median Response Time,Average Response Time,Min Response Time,Max Response Time,Average Content Size,Requests/s,Failures/s,50%,66%,75%,80%,90%,95%,98%,99%,99.9%,99.99%,100%
grpc,/dcache.DcacheService/AddVisitor,186109,0,79,74.60541254397303,3.7561320059467107,119.94536400015932,10.999731340236098,4816.33283284295,0.0,79,83,86,89,93,97,100,110,120,120,120
,Aggregated,186109,0,79,74.60541254397303,3.7561320059467107,119.94536400015932,10.999731340236098,4816.33283284295,0.0,79,83,86,89,93,97,100,110,120,120,120



@ -0,0 +1,7 @@
#!/bin/bash
ghz --insecure --proto ./proto/dcache/dcache.proto --call dcache.DcacheService.Write \
-c 300 -n 30000 --rps 4000 -O html -o out.html \
-d '{"data":"{\"AddVisitor\": \"test_1\"}"}' \
localhost:9001
# -d '{"data":"{\"AddCaptcha\":{\"id\":\"test_1\",\"mcaptcha\":{\"defense\":{\"current_visitor_threshold\":0,\"levels\":[{\"difficulty_factor\":500,\"visitor_threshold\":50},{\"difficulty_factor\":50000,\"visitor_threshold\":5000}]},\"duration\":30,\"visitor_threshold\":0}}}"}' \

View file

@ -0,0 +1 @@
10 messages per batch request
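This note records the benchmark scenario: each PipelineDcacheOps call carried a batch of 10 messages. For reference, such a batch can be assembled with the generated Python client that appears later in this diff; the sketch below is illustrative only, and the module path dcache_py.dcache_pb2_grpc plus the 127.0.0.1:9001 target are assumptions (taken from launch.sh), not the benchmark's actual driver code.

import grpc

import dcache_py.dcache_pb2 as dcache_pb2
import dcache_py.dcache_pb2_grpc as dcache_pb2_grpc  # assumed module path


def send_batch(captcha_id: str = "test_1") -> int:
    # Target address assumed from launch.sh (node 1 listens on 127.0.0.1:9001).
    with grpc.insecure_channel("127.0.0.1:9001") as channel:
        stub = dcache_pb2_grpc.DcacheServiceStub(channel)
        batch = dcache_pb2.DcacheBatchRequest(
            requests=[
                dcache_pb2.DcacheRequest(
                    addVisitor=dcache_pb2.CaptchaID(id=captcha_id)
                )
                for _ in range(10)  # 10 messages per batch request
            ]
        )
        # A single RPC carries all 10 operations; responses come back in order.
        reply = stub.PipelineDcacheOps(batch)
        return len(reply.responses)


if __name__ == "__main__":
    print(send_batch())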

View file

@ -0,0 +1 @@
Count,Message,Traceback,Nodes

View file

@ -0,0 +1 @@
Method,Name,Error,Occurrences

File diff suppressed because one or more lines are too long

View file

@ -0,0 +1,3 @@
Type,Name,Request Count,Failure Count,Median Response Time,Average Response Time,Min Response Time,Max Response Time,Average Content Size,Requests/s,Failures/s,50%,66%,75%,80%,90%,95%,98%,99%,99.9%,99.99%,100%
grpc,/dcache.DcacheService/PipelineDcacheOps,41383,0,140.0,99.16818701259079,5.581609002547339,182.89305199868977,40.0,650.896214047811,0.0,140,150,150,150,160,160,160,170,180,180,180
,Aggregated,41383,0,140.0,99.16818701259079,5.581609002547339,182.89305199868977,40.0,650.896214047811,0.0,140,150,150,150,160,160,160,170,180,180,180

View file

@ -0,0 +1 @@
Count,Message,Traceback,Nodes

View file

@ -0,0 +1 @@
Method,Name,Error,Occurrences

File diff suppressed because one or more lines are too long

View file

@ -0,0 +1,3 @@
Type,Name,Request Count,Failure Count,Median Response Time,Average Response Time,Min Response Time,Max Response Time,Average Content Size,Requests/s,Failures/s,50%,66%,75%,80%,90%,95%,98%,99%,99.9%,99.99%,100%
grpc,/dcache.DcacheService/Write,96465,0,600.0,530.5241670541676,3.931416000114041,2860.153126999876,130.11822940963043,732.274667601832,0.0,600,720,830,880,1100,1200,1300,1500,2300,2900,2900
,Aggregated,96465,0,600.0,530.5241670541676,3.931416000114041,2860.153126999876,130.11822940963043,732.274667601832,0.0,600,720,830,880,1100,1200,1300,1500,2300,2900,2900

View file

@ -0,0 +1 @@
Count,Message,Traceback,Nodes

View file

@ -0,0 +1 @@
Method,Name,Error,Occurrences

File diff suppressed because one or more lines are too long

View file

@ -0,0 +1,3 @@
Type,Name,Request Count,Failure Count,Median Response Time,Average Response Time,Min Response Time,Max Response Time,Average Content Size,Requests/s,Failures/s,50%,66%,75%,80%,90%,95%,98%,99%,99.9%,99.99%,100%
grpc,/dcache.DcacheService/AddVisitor,358924,0,79,77.86313645947614,3.354386999944836,123.28810700000759,0.0,4842.970815301002,0.0,79,84,86,88,92,96,100,100,110,120,120
,Aggregated,358924,0,79,77.86313645947614,3.354386999944836,123.28810700000759,0.0,4842.970815301002,0.0,79,84,86,88,92,96,100,100,110,120,120

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

Image preview: 674 KiB

File diff suppressed because one or more lines are too long

Image preview: 1.2 MiB

10
build.rs Normal file
View file

@ -0,0 +1,10 @@
fn main() -> Result<(), Box<dyn std::error::Error>> {
// tonic_build::configure()
// .out_dir("protoout")
// .compile(
// &["proto/dcache/dcache.proto"],
// &["proto/dcache"],
// )?;
tonic_build::compile_protos("proto/dcache/dcache.proto")?;
Ok(())
}

9
check.sh Executable file
View file

@ -0,0 +1,9 @@
#!/bin/bash
protoc \
--proto_path=${PWD}/proto \
--proto_path=${PWD}/bufbuild \
--go_out=${PWD} \
--go-grpc_out=${PWD} \
--validate_out="lang=rust:${PWD}" \
${PWD}/proto/dcache/dcache.proto
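check.sh only covers the Go and validate-rust outputs; the Python stubs under dcache_py/ further down are produced with grpcio-tools, which requirements.txt pins. A hypothetical regeneration sketch follows; it is not the project's actual command, and the include path and output directories are assumptions that may need adjusting to match the dcache_py package layout.

from grpc_tools import protoc  # provided by the grpcio-tools pin in requirements.txt

# Hypothetical invocation; -I and the *_out targets are assumptions.
protoc.main([
    "grpc_tools.protoc",
    "-Iproto",
    "--python_out=.",
    "--grpc_python_out=.",
    "--pyi_out=.",
    "proto/dcache/dcache.proto",
])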

80
dcache_py/dcache_pb2.py Normal file

File diff suppressed because one or more lines are too long

218
dcache_py/dcache_pb2.pyi Normal file
View file

@ -0,0 +1,218 @@
from google.protobuf.internal import containers as _containers
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Iterable as _Iterable, Mapping as _Mapping, Optional as _Optional, Union as _Union
DESCRIPTOR: _descriptor.FileDescriptor
class Level(_message.Message):
__slots__ = ("visitor_threshold", "difficulty_factor")
VISITOR_THRESHOLD_FIELD_NUMBER: _ClassVar[int]
DIFFICULTY_FACTOR_FIELD_NUMBER: _ClassVar[int]
visitor_threshold: int
difficulty_factor: int
def __init__(self, visitor_threshold: _Optional[int] = ..., difficulty_factor: _Optional[int] = ...) -> None: ...
class Defense(_message.Message):
__slots__ = ("levels",)
LEVELS_FIELD_NUMBER: _ClassVar[int]
levels: _containers.RepeatedCompositeFieldContainer[Level]
def __init__(self, levels: _Optional[_Iterable[_Union[Level, _Mapping]]] = ...) -> None: ...
class MCaptcha(_message.Message):
__slots__ = ("duration", "defense")
DURATION_FIELD_NUMBER: _ClassVar[int]
DEFENSE_FIELD_NUMBER: _ClassVar[int]
duration: int
defense: Defense
def __init__(self, duration: _Optional[int] = ..., defense: _Optional[_Union[Defense, _Mapping]] = ...) -> None: ...
class AddCaptchaRequest(_message.Message):
__slots__ = ("id", "mcaptcha")
ID_FIELD_NUMBER: _ClassVar[int]
MCAPTCHA_FIELD_NUMBER: _ClassVar[int]
id: str
mcaptcha: MCaptcha
def __init__(self, id: _Optional[str] = ..., mcaptcha: _Optional[_Union[MCaptcha, _Mapping]] = ...) -> None: ...
class RenameCaptchaRequest(_message.Message):
__slots__ = ("name", "rename_to")
NAME_FIELD_NUMBER: _ClassVar[int]
RENAME_TO_FIELD_NUMBER: _ClassVar[int]
name: str
rename_to: str
def __init__(self, name: _Optional[str] = ..., rename_to: _Optional[str] = ...) -> None: ...
class CachePowRequest(_message.Message):
__slots__ = ("string", "difficulty_factor", "duration", "key")
STRING_FIELD_NUMBER: _ClassVar[int]
DIFFICULTY_FACTOR_FIELD_NUMBER: _ClassVar[int]
DURATION_FIELD_NUMBER: _ClassVar[int]
KEY_FIELD_NUMBER: _ClassVar[int]
string: str
difficulty_factor: int
duration: int
key: str
def __init__(self, string: _Optional[str] = ..., difficulty_factor: _Optional[int] = ..., duration: _Optional[int] = ..., key: _Optional[str] = ...) -> None: ...
class CacheResultRequest(_message.Message):
__slots__ = ("token", "key", "duration")
TOKEN_FIELD_NUMBER: _ClassVar[int]
KEY_FIELD_NUMBER: _ClassVar[int]
DURATION_FIELD_NUMBER: _ClassVar[int]
token: str
key: str
duration: int
def __init__(self, token: _Optional[str] = ..., key: _Optional[str] = ..., duration: _Optional[int] = ...) -> None: ...
class DeleteCaptchaResultRequest(_message.Message):
__slots__ = ("token",)
TOKEN_FIELD_NUMBER: _ClassVar[int]
token: str
def __init__(self, token: _Optional[str] = ...) -> None: ...
class CaptchaID(_message.Message):
__slots__ = ("id",)
ID_FIELD_NUMBER: _ClassVar[int]
id: str
def __init__(self, id: _Optional[str] = ...) -> None: ...
class PoID(_message.Message):
__slots__ = ("id",)
ID_FIELD_NUMBER: _ClassVar[int]
id: str
def __init__(self, id: _Optional[str] = ...) -> None: ...
class AddVisitorResult(_message.Message):
__slots__ = ("duration", "difficulty_factor")
DURATION_FIELD_NUMBER: _ClassVar[int]
DIFFICULTY_FACTOR_FIELD_NUMBER: _ClassVar[int]
duration: int
difficulty_factor: int
def __init__(self, duration: _Optional[int] = ..., difficulty_factor: _Optional[int] = ...) -> None: ...
class OptionAddVisitorResult(_message.Message):
__slots__ = ("result",)
RESULT_FIELD_NUMBER: _ClassVar[int]
result: AddVisitorResult
def __init__(self, result: _Optional[_Union[AddVisitorResult, _Mapping]] = ...) -> None: ...
class RaftRequest(_message.Message):
__slots__ = ("data",)
DATA_FIELD_NUMBER: _ClassVar[int]
data: str
def __init__(self, data: _Optional[str] = ...) -> None: ...
class RaftReply(_message.Message):
__slots__ = ("data", "error")
DATA_FIELD_NUMBER: _ClassVar[int]
ERROR_FIELD_NUMBER: _ClassVar[int]
data: str
error: str
def __init__(self, data: _Optional[str] = ..., error: _Optional[str] = ...) -> None: ...
class Learner(_message.Message):
__slots__ = ("id", "addr")
ID_FIELD_NUMBER: _ClassVar[int]
ADDR_FIELD_NUMBER: _ClassVar[int]
id: int
addr: str
def __init__(self, id: _Optional[int] = ..., addr: _Optional[str] = ...) -> None: ...
class CaptchaExistsResponse(_message.Message):
__slots__ = ("exists",)
EXISTS_FIELD_NUMBER: _ClassVar[int]
exists: bool
def __init__(self, exists: bool = ...) -> None: ...
class GetVisitorCountResponse(_message.Message):
__slots__ = ("visitors",)
VISITORS_FIELD_NUMBER: _ClassVar[int]
visitors: int
def __init__(self, visitors: _Optional[int] = ...) -> None: ...
class OptionGetVisitorCountResponse(_message.Message):
__slots__ = ("result",)
RESULT_FIELD_NUMBER: _ClassVar[int]
result: GetVisitorCountResponse
def __init__(self, result: _Optional[_Union[GetVisitorCountResponse, _Mapping]] = ...) -> None: ...
class DcacheRequest(_message.Message):
__slots__ = ("addCaptcha", "addVisitor", "renameCaptcha", "removeCaptcha", "cachePow", "cacheResult", "captchaExists", "getVisitorCount")
ADDCAPTCHA_FIELD_NUMBER: _ClassVar[int]
ADDVISITOR_FIELD_NUMBER: _ClassVar[int]
RENAMECAPTCHA_FIELD_NUMBER: _ClassVar[int]
REMOVECAPTCHA_FIELD_NUMBER: _ClassVar[int]
CACHEPOW_FIELD_NUMBER: _ClassVar[int]
CACHERESULT_FIELD_NUMBER: _ClassVar[int]
CAPTCHAEXISTS_FIELD_NUMBER: _ClassVar[int]
GETVISITORCOUNT_FIELD_NUMBER: _ClassVar[int]
addCaptcha: AddCaptchaRequest
addVisitor: CaptchaID
renameCaptcha: RenameCaptchaRequest
removeCaptcha: CaptchaID
cachePow: CachePowRequest
cacheResult: CacheResultRequest
captchaExists: CaptchaID
getVisitorCount: CaptchaID
def __init__(self, addCaptcha: _Optional[_Union[AddCaptchaRequest, _Mapping]] = ..., addVisitor: _Optional[_Union[CaptchaID, _Mapping]] = ..., renameCaptcha: _Optional[_Union[RenameCaptchaRequest, _Mapping]] = ..., removeCaptcha: _Optional[_Union[CaptchaID, _Mapping]] = ..., cachePow: _Optional[_Union[CachePowRequest, _Mapping]] = ..., cacheResult: _Optional[_Union[CacheResultRequest, _Mapping]] = ..., captchaExists: _Optional[_Union[CaptchaID, _Mapping]] = ..., getVisitorCount: _Optional[_Union[CaptchaID, _Mapping]] = ...) -> None: ...
class DcacheResponse(_message.Message):
__slots__ = ("option_add_visitor_result", "other", "captcha_exists", "get_visitor_count")
OPTION_ADD_VISITOR_RESULT_FIELD_NUMBER: _ClassVar[int]
OTHER_FIELD_NUMBER: _ClassVar[int]
CAPTCHA_EXISTS_FIELD_NUMBER: _ClassVar[int]
GET_VISITOR_COUNT_FIELD_NUMBER: _ClassVar[int]
option_add_visitor_result: OptionAddVisitorResult
other: RaftReply
captcha_exists: CaptchaExistsResponse
get_visitor_count: OptionGetVisitorCountResponse
def __init__(self, option_add_visitor_result: _Optional[_Union[OptionAddVisitorResult, _Mapping]] = ..., other: _Optional[_Union[RaftReply, _Mapping]] = ..., captcha_exists: _Optional[_Union[CaptchaExistsResponse, _Mapping]] = ..., get_visitor_count: _Optional[_Union[OptionGetVisitorCountResponse, _Mapping]] = ...) -> None: ...
class DcacheBatchRequest(_message.Message):
__slots__ = ("requests",)
REQUESTS_FIELD_NUMBER: _ClassVar[int]
requests: _containers.RepeatedCompositeFieldContainer[DcacheRequest]
def __init__(self, requests: _Optional[_Iterable[_Union[DcacheRequest, _Mapping]]] = ...) -> None: ...
class DcacheBatchResponse(_message.Message):
__slots__ = ("responses",)
RESPONSES_FIELD_NUMBER: _ClassVar[int]
responses: _containers.RepeatedCompositeFieldContainer[DcacheResponse]
def __init__(self, responses: _Optional[_Iterable[_Union[DcacheResponse, _Mapping]]] = ...) -> None: ...
class RetrievePowRequest(_message.Message):
__slots__ = ("token", "key")
TOKEN_FIELD_NUMBER: _ClassVar[int]
KEY_FIELD_NUMBER: _ClassVar[int]
token: str
key: str
def __init__(self, token: _Optional[str] = ..., key: _Optional[str] = ...) -> None: ...
class RetrievePowResponse(_message.Message):
__slots__ = ("difficulty_factor", "duration", "key")
DIFFICULTY_FACTOR_FIELD_NUMBER: _ClassVar[int]
DURATION_FIELD_NUMBER: _ClassVar[int]
KEY_FIELD_NUMBER: _ClassVar[int]
difficulty_factor: int
duration: int
key: str
def __init__(self, difficulty_factor: _Optional[int] = ..., duration: _Optional[int] = ..., key: _Optional[str] = ...) -> None: ...
class CaptchaResultVerified(_message.Message):
__slots__ = ("verified",)
VERIFIED_FIELD_NUMBER: _ClassVar[int]
verified: bool
def __init__(self, verified: bool = ...) -> None: ...
class DeletePowRequest(_message.Message):
__slots__ = ("string",)
STRING_FIELD_NUMBER: _ClassVar[int]
string: str
def __init__(self, string: _Optional[str] = ...) -> None: ...
class OptionalRetrievePoWResponse(_message.Message):
__slots__ = ("result",)
RESULT_FIELD_NUMBER: _ClassVar[int]
result: RetrievePowResponse
def __init__(self, result: _Optional[_Union[RetrievePowResponse, _Mapping]] = ...) -> None: ...

View file

@ -0,0 +1,663 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
import dcache_py.dcache_pb2 as dcache__pb2
class DcacheServiceStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.AddCaptcha = channel.unary_unary(
'/dcache.DcacheService/AddCaptcha',
request_serializer=dcache__pb2.AddCaptchaRequest.SerializeToString,
response_deserializer=dcache__pb2.RaftReply.FromString,
)
self.AddVisitor = channel.unary_unary(
'/dcache.DcacheService/AddVisitor',
request_serializer=dcache__pb2.CaptchaID.SerializeToString,
response_deserializer=dcache__pb2.OptionAddVisitorResult.FromString,
)
self.RenameCaptcha = channel.unary_unary(
'/dcache.DcacheService/RenameCaptcha',
request_serializer=dcache__pb2.RenameCaptchaRequest.SerializeToString,
response_deserializer=dcache__pb2.RaftReply.FromString,
)
self.RemoveCaptcha = channel.unary_unary(
'/dcache.DcacheService/RemoveCaptcha',
request_serializer=dcache__pb2.CaptchaID.SerializeToString,
response_deserializer=dcache__pb2.RaftReply.FromString,
)
self.CachePow = channel.unary_unary(
'/dcache.DcacheService/CachePow',
request_serializer=dcache__pb2.CachePowRequest.SerializeToString,
response_deserializer=dcache__pb2.RaftReply.FromString,
)
self.RetrievePow = channel.unary_unary(
'/dcache.DcacheService/RetrievePow',
request_serializer=dcache__pb2.RetrievePowRequest.SerializeToString,
response_deserializer=dcache__pb2.OptionalRetrievePoWResponse.FromString,
)
self.DeletePow = channel.unary_unary(
'/dcache.DcacheService/DeletePow',
request_serializer=dcache__pb2.DeletePowRequest.SerializeToString,
response_deserializer=dcache__pb2.RaftReply.FromString,
)
self.CacheResult = channel.unary_unary(
'/dcache.DcacheService/CacheResult',
request_serializer=dcache__pb2.CacheResultRequest.SerializeToString,
response_deserializer=dcache__pb2.RaftReply.FromString,
)
self.VerifyCaptchaResult = channel.unary_unary(
'/dcache.DcacheService/VerifyCaptchaResult',
request_serializer=dcache__pb2.RetrievePowRequest.SerializeToString,
response_deserializer=dcache__pb2.CaptchaResultVerified.FromString,
)
self.DeleteCaptchaResult = channel.unary_unary(
'/dcache.DcacheService/DeleteCaptchaResult',
request_serializer=dcache__pb2.DeleteCaptchaResultRequest.SerializeToString,
response_deserializer=dcache__pb2.RaftReply.FromString,
)
self.CaptchaExists = channel.unary_unary(
'/dcache.DcacheService/CaptchaExists',
request_serializer=dcache__pb2.CaptchaID.SerializeToString,
response_deserializer=dcache__pb2.CaptchaExistsResponse.FromString,
)
self.GetVisitorCount = channel.unary_unary(
'/dcache.DcacheService/GetVisitorCount',
request_serializer=dcache__pb2.CaptchaID.SerializeToString,
response_deserializer=dcache__pb2.OptionGetVisitorCountResponse.FromString,
)
self.PipelineDcacheOps = channel.unary_unary(
'/dcache.DcacheService/PipelineDcacheOps',
request_serializer=dcache__pb2.DcacheBatchRequest.SerializeToString,
response_deserializer=dcache__pb2.DcacheBatchResponse.FromString,
)
self.AddLearner = channel.unary_unary(
'/dcache.DcacheService/AddLearner',
request_serializer=dcache__pb2.Learner.SerializeToString,
response_deserializer=dcache__pb2.RaftReply.FromString,
)
self.Write = channel.unary_unary(
'/dcache.DcacheService/Write',
request_serializer=dcache__pb2.RaftRequest.SerializeToString,
response_deserializer=dcache__pb2.RaftReply.FromString,
)
self.Forward = channel.unary_unary(
'/dcache.DcacheService/Forward',
request_serializer=dcache__pb2.RaftRequest.SerializeToString,
response_deserializer=dcache__pb2.RaftReply.FromString,
)
self.AppendEntries = channel.unary_unary(
'/dcache.DcacheService/AppendEntries',
request_serializer=dcache__pb2.RaftRequest.SerializeToString,
response_deserializer=dcache__pb2.RaftReply.FromString,
)
self.InstallSnapshot = channel.unary_unary(
'/dcache.DcacheService/InstallSnapshot',
request_serializer=dcache__pb2.RaftRequest.SerializeToString,
response_deserializer=dcache__pb2.RaftReply.FromString,
)
self.vote = channel.unary_unary(
'/dcache.DcacheService/vote',
request_serializer=dcache__pb2.RaftRequest.SerializeToString,
response_deserializer=dcache__pb2.RaftReply.FromString,
)
class DcacheServiceServicer(object):
"""Missing associated documentation comment in .proto file."""
def AddCaptcha(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def AddVisitor(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def RenameCaptcha(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def RemoveCaptcha(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def CachePow(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def RetrievePow(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def DeletePow(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def CacheResult(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def VerifyCaptchaResult(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def DeleteCaptchaResult(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def CaptchaExists(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetVisitorCount(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def PipelineDcacheOps(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def AddLearner(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Write(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Forward(self, request, context):
"""/ Forward a request to other
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def AppendEntries(self, request, context):
"""raft RPC
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def InstallSnapshot(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def vote(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_DcacheServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'AddCaptcha': grpc.unary_unary_rpc_method_handler(
servicer.AddCaptcha,
request_deserializer=dcache__pb2.AddCaptchaRequest.FromString,
response_serializer=dcache__pb2.RaftReply.SerializeToString,
),
'AddVisitor': grpc.unary_unary_rpc_method_handler(
servicer.AddVisitor,
request_deserializer=dcache__pb2.CaptchaID.FromString,
response_serializer=dcache__pb2.OptionAddVisitorResult.SerializeToString,
),
'RenameCaptcha': grpc.unary_unary_rpc_method_handler(
servicer.RenameCaptcha,
request_deserializer=dcache__pb2.RenameCaptchaRequest.FromString,
response_serializer=dcache__pb2.RaftReply.SerializeToString,
),
'RemoveCaptcha': grpc.unary_unary_rpc_method_handler(
servicer.RemoveCaptcha,
request_deserializer=dcache__pb2.CaptchaID.FromString,
response_serializer=dcache__pb2.RaftReply.SerializeToString,
),
'CachePow': grpc.unary_unary_rpc_method_handler(
servicer.CachePow,
request_deserializer=dcache__pb2.CachePowRequest.FromString,
response_serializer=dcache__pb2.RaftReply.SerializeToString,
),
'RetrievePow': grpc.unary_unary_rpc_method_handler(
servicer.RetrievePow,
request_deserializer=dcache__pb2.RetrievePowRequest.FromString,
response_serializer=dcache__pb2.OptionalRetrievePoWResponse.SerializeToString,
),
'DeletePow': grpc.unary_unary_rpc_method_handler(
servicer.DeletePow,
request_deserializer=dcache__pb2.DeletePowRequest.FromString,
response_serializer=dcache__pb2.RaftReply.SerializeToString,
),
'CacheResult': grpc.unary_unary_rpc_method_handler(
servicer.CacheResult,
request_deserializer=dcache__pb2.CacheResultRequest.FromString,
response_serializer=dcache__pb2.RaftReply.SerializeToString,
),
'VerifyCaptchaResult': grpc.unary_unary_rpc_method_handler(
servicer.VerifyCaptchaResult,
request_deserializer=dcache__pb2.RetrievePowRequest.FromString,
response_serializer=dcache__pb2.CaptchaResultVerified.SerializeToString,
),
'DeleteCaptchaResult': grpc.unary_unary_rpc_method_handler(
servicer.DeleteCaptchaResult,
request_deserializer=dcache__pb2.DeleteCaptchaResultRequest.FromString,
response_serializer=dcache__pb2.RaftReply.SerializeToString,
),
'CaptchaExists': grpc.unary_unary_rpc_method_handler(
servicer.CaptchaExists,
request_deserializer=dcache__pb2.CaptchaID.FromString,
response_serializer=dcache__pb2.CaptchaExistsResponse.SerializeToString,
),
'GetVisitorCount': grpc.unary_unary_rpc_method_handler(
servicer.GetVisitorCount,
request_deserializer=dcache__pb2.CaptchaID.FromString,
response_serializer=dcache__pb2.OptionGetVisitorCountResponse.SerializeToString,
),
'PipelineDcacheOps': grpc.unary_unary_rpc_method_handler(
servicer.PipelineDcacheOps,
request_deserializer=dcache__pb2.DcacheBatchRequest.FromString,
response_serializer=dcache__pb2.DcacheBatchResponse.SerializeToString,
),
'AddLearner': grpc.unary_unary_rpc_method_handler(
servicer.AddLearner,
request_deserializer=dcache__pb2.Learner.FromString,
response_serializer=dcache__pb2.RaftReply.SerializeToString,
),
'Write': grpc.unary_unary_rpc_method_handler(
servicer.Write,
request_deserializer=dcache__pb2.RaftRequest.FromString,
response_serializer=dcache__pb2.RaftReply.SerializeToString,
),
'Forward': grpc.unary_unary_rpc_method_handler(
servicer.Forward,
request_deserializer=dcache__pb2.RaftRequest.FromString,
response_serializer=dcache__pb2.RaftReply.SerializeToString,
),
'AppendEntries': grpc.unary_unary_rpc_method_handler(
servicer.AppendEntries,
request_deserializer=dcache__pb2.RaftRequest.FromString,
response_serializer=dcache__pb2.RaftReply.SerializeToString,
),
'InstallSnapshot': grpc.unary_unary_rpc_method_handler(
servicer.InstallSnapshot,
request_deserializer=dcache__pb2.RaftRequest.FromString,
response_serializer=dcache__pb2.RaftReply.SerializeToString,
),
'vote': grpc.unary_unary_rpc_method_handler(
servicer.vote,
request_deserializer=dcache__pb2.RaftRequest.FromString,
response_serializer=dcache__pb2.RaftReply.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'dcache.DcacheService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
# This class is part of an EXPERIMENTAL API.
class DcacheService(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def AddCaptcha(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/AddCaptcha',
dcache__pb2.AddCaptchaRequest.SerializeToString,
dcache__pb2.RaftReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def AddVisitor(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/AddVisitor',
dcache__pb2.CaptchaID.SerializeToString,
dcache__pb2.OptionAddVisitorResult.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def RenameCaptcha(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/RenameCaptcha',
dcache__pb2.RenameCaptchaRequest.SerializeToString,
dcache__pb2.RaftReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def RemoveCaptcha(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/RemoveCaptcha',
dcache__pb2.CaptchaID.SerializeToString,
dcache__pb2.RaftReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def CachePow(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/CachePow',
dcache__pb2.CachePowRequest.SerializeToString,
dcache__pb2.RaftReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def RetrievePow(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/RetrievePow',
dcache__pb2.RetrievePowRequest.SerializeToString,
dcache__pb2.OptionalRetrievePoWResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def DeletePow(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/DeletePow',
dcache__pb2.DeletePowRequest.SerializeToString,
dcache__pb2.RaftReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def CacheResult(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/CacheResult',
dcache__pb2.CacheResultRequest.SerializeToString,
dcache__pb2.RaftReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def VerifyCaptchaResult(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/VerifyCaptchaResult',
dcache__pb2.RetrievePowRequest.SerializeToString,
dcache__pb2.CaptchaResultVerified.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def DeleteCaptchaResult(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/DeleteCaptchaResult',
dcache__pb2.DeleteCaptchaResultRequest.SerializeToString,
dcache__pb2.RaftReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def CaptchaExists(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/CaptchaExists',
dcache__pb2.CaptchaID.SerializeToString,
dcache__pb2.CaptchaExistsResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def GetVisitorCount(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/GetVisitorCount',
dcache__pb2.CaptchaID.SerializeToString,
dcache__pb2.OptionGetVisitorCountResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def PipelineDcacheOps(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/PipelineDcacheOps',
dcache__pb2.DcacheBatchRequest.SerializeToString,
dcache__pb2.DcacheBatchResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def AddLearner(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/AddLearner',
dcache__pb2.Learner.SerializeToString,
dcache__pb2.RaftReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Write(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/Write',
dcache__pb2.RaftRequest.SerializeToString,
dcache__pb2.RaftReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Forward(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/Forward',
dcache__pb2.RaftRequest.SerializeToString,
dcache__pb2.RaftReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def AppendEntries(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/AppendEntries',
dcache__pb2.RaftRequest.SerializeToString,
dcache__pb2.RaftReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def InstallSnapshot(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/InstallSnapshot',
dcache__pb2.RaftRequest.SerializeToString,
dcache__pb2.RaftReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def vote(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dcache.DcacheService/vote',
dcache__pb2.RaftRequest.SerializeToString,
dcache__pb2.RaftReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)

8
launch.sh Executable file
View file

@ -0,0 +1,8 @@
#!/bin/bash
nohup ./target/release/main --id 1 --http-addr 127.0.0.1:9001 --introducer-addr 127.0.0.1:9001 --introducer-id 1 --cluster-size 3 &
sleep 1
nohup ./target/release/main --id 2 --http-addr 127.0.0.1:9002 --introducer-addr 127.0.0.1:9001 --introducer-id 1 --cluster-size 3 &
sleep 1
nohup ./target/release/main --id 3 --http-addr 127.0.0.1:9003 --introducer-addr 127.0.0.1:9001 --introducer-id 1 --cluster-size 3 &
read -p "Continue? (Y/N): " confirm && killall main

177
proto/dcache/dcache.proto Normal file
View file

@ -0,0 +1,177 @@
syntax = "proto3";
package dcache;
message Level {
uint32 visitor_threshold = 301;
uint32 difficulty_factor= 302;
}
message Defense {
repeated Level levels = 401;
}
message MCaptcha {
uint64 duration = 502;
Defense defense = 503;
}
message AddCaptchaRequest {
string id = 601;
MCaptcha mcaptcha = 602;
}
message RenameCaptchaRequest {
string name = 701;
string rename_to = 702;
}
message CachePowRequest {
string string= 801;
uint32 difficulty_factor = 802;
uint64 duration = 803;
string key = 804;
}
message CacheResultRequest {
string token = 817;
string key = 818;
uint64 duration= 819;
}
message DeleteCaptchaResultRequest {
string token = 821;
}
message CaptchaID{
string id = 1;
}
message PoID{
string id = 1;
}
message AddVisitorResult {
uint64 duration = 901;
uint32 difficulty_factor = 902;
}
message OptionAddVisitorResult {
optional AddVisitorResult result = 911;
}
message RaftRequest {
string data = 1;
}
message RaftReply {
string data = 1;
string error = 2;
}
message Learner {
uint64 id = 1;
string addr = 2;
}
message CaptchaExistsResponse {
bool exists = 1;
}
message GetVisitorCountResponse {
uint32 visitors = 1;
}
message OptionGetVisitorCountResponse {
optional GetVisitorCountResponse result = 1;
}
message DcacheRequest {
oneof DcacheRequest {
AddCaptchaRequest addCaptcha = 1;
CaptchaID addVisitor = 2;
RenameCaptchaRequest renameCaptcha = 3;
CaptchaID removeCaptcha = 4;
CachePowRequest cachePow = 5;
CacheResultRequest cacheResult = 6;
CaptchaID captchaExists = 7;
CaptchaID getVisitorCount = 8;
}
}
message DcacheResponse {
oneof DcacheResponse {
OptionAddVisitorResult option_add_visitor_result = 1;
RaftReply other = 2;
CaptchaExistsResponse captcha_exists = 3;
OptionGetVisitorCountResponse get_visitor_count = 4;
}
}
message DcacheBatchRequest {
repeated DcacheRequest requests = 1;
}
message DcacheBatchResponse {
repeated DcacheResponse responses = 1;
}
message RetrievePowRequest {
string token = 1;
string key = 2;
}
message RetrievePowResponse {
uint32 difficulty_factor = 1;
uint64 duration = 2;
string key = 3;
}
message CaptchaResultVerified {
bool verified = 1;
}
message DeletePowRequest {
string string = 1;
}
message OptionalRetrievePoWResponse {
optional RetrievePowResponse result = 1;
}
service DcacheService {
rpc AddCaptcha(AddCaptchaRequest) returns (RaftReply) {}
rpc AddVisitor(CaptchaID) returns (OptionAddVisitorResult) {}
rpc RenameCaptcha(RenameCaptchaRequest) returns (RaftReply) {}
rpc RemoveCaptcha(CaptchaID) returns (RaftReply) {}
rpc CachePow(CachePowRequest) returns (RaftReply) {}
rpc RetrievePow(RetrievePowRequest) returns (OptionalRetrievePoWResponse) {}
rpc DeletePow(DeletePowRequest) returns (RaftReply) {}
rpc CacheResult(CacheResultRequest) returns (RaftReply) {}
rpc VerifyCaptchaResult(RetrievePowRequest) returns (CaptchaResultVerified) {}
rpc DeleteCaptchaResult(DeleteCaptchaResultRequest) returns (RaftReply) {}
rpc CaptchaExists(CaptchaID) returns (CaptchaExistsResponse) {}
rpc GetVisitorCount(CaptchaID) returns (OptionGetVisitorCountResponse) {}
rpc PipelineDcacheOps(DcacheBatchRequest) returns (DcacheBatchResponse) {}
rpc AddLearner(Learner) returns (RaftReply) {}
rpc Write(RaftRequest) returns (RaftReply) {}
/// Forward a request to other
rpc Forward(RaftRequest) returns (RaftReply) {}
// raft RPC
rpc AppendEntries(RaftRequest) returns (RaftReply);
rpc InstallSnapshot(RaftRequest) returns (RaftReply);
rpc vote(RaftRequest) returns (RaftReply);
}
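The nested Level/Defense/MCaptcha messages above map directly onto the commented-out AddCaptcha payload in the ghz benchmark script earlier in this diff. As a hedged illustration only (module paths and the 127.0.0.1:9001 target are assumptions, mirroring launch.sh), the same captcha definition could be registered through the generated Python stub like this:

import grpc

import dcache_py.dcache_pb2 as dcache_pb2
import dcache_py.dcache_pb2_grpc as dcache_pb2_grpc  # assumed module path


def add_captcha(captcha_id: str = "test_1") -> None:
    with grpc.insecure_channel("127.0.0.1:9001") as channel:
        stub = dcache_pb2_grpc.DcacheServiceStub(channel)
        # Values mirror the commented-out AddCaptcha payload in the ghz script:
        # duration 30, levels (50, 500) and (5000, 50000).
        mcaptcha = dcache_pb2.MCaptcha(
            duration=30,
            defense=dcache_pb2.Defense(
                levels=[
                    dcache_pb2.Level(visitor_threshold=50, difficulty_factor=500),
                    dcache_pb2.Level(visitor_threshold=5000, difficulty_factor=50000),
                ]
            ),
        )
        reply = stub.AddCaptcha(
            dcache_pb2.AddCaptchaRequest(id=captcha_id, mcaptcha=mcaptcha)
        )
        print(reply.error or reply.data)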

21
renovate.json Normal file
View file

@ -0,0 +1,21 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"config:recommended",
":dependencyDashboard"
],
"labels": [
"renovate-bot"
],
"prHourlyLimit": 0,
"timezone": "Asia/kolkata",
"prCreation": "immediate",
"vulnerabilityAlerts": {
"enabled": true,
"labels": [
"renovate-bot",
"renovate-security",
"security"
]
}
}

32
requirements.txt Normal file
View file

@ -0,0 +1,32 @@
asyncio==3.4.3
blinker==1.8.2
Brotli==1.1.0
certifi==2024.6.2
charset-normalizer==3.3.2
click==8.1.7
ConfigArgParse==1.7
Flask==3.0.3
Flask-BasicAuth==0.2.0
Flask-Cors==4.0.1
gevent==24.2.1
geventhttpclient==2.3.1
greenlet==3.0.3
grpc-interceptor==0.15.4
grpcio==1.64.1
grpcio-tools==1.60.0
idna==3.7
itsdangerous==2.2.0
Jinja2==3.1.4
locust==2.29.0
MarkupSafe==2.1.5
msgpack==1.0.8
protobuf==4.25.3
psutil==5.9.8
pyzmq==26.0.3
requests==2.32.3
roundrobin==0.0.4
six==1.16.0
urllib3==2.2.1
Werkzeug==3.0.3
zope.event==5.0
zope.interface==6.4.post2

View file

@ -17,7 +17,6 @@
*/
use std::collections::BTreeMap;
use std::sync::Arc;
use std::time::Duration;
use openraft::error::RaftError;
use openraft::BasicNode;

View file

@ -16,11 +16,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
use clap::Parser;
use dcache::network::raft_network_impl::DcacheNetwork;
use dcache::start_example_raft_node;
use dcache::store::DcacheStore;
use dcache::DcacheTypeConfig;
use openraft::Raft;
use tracing_subscriber::EnvFilter;
//pub type DcacheRaft = Raft<DcacheTypeConfig, DcacheNetwork, DcacheStore>;
@ -44,7 +40,7 @@ pub struct Opt {
pub cluster_size: usize,
}
#[actix_web::main]
#[actix_rt::main]
async fn main() -> std::io::Result<()> {
// Setup the logger
tracing_subscriber::fmt()

View file

@ -20,27 +20,26 @@
use std::io::Cursor;
use std::sync::Arc;
use actix_web::middleware;
use actix_web::middleware::Logger;
use actix_web::web::Data;
use actix_web::App;
use actix_web::HttpServer;
use openraft::storage::Adaptor;
use openraft::BasicNode;
use openraft::Config;
use openraft::Raft;
use tonic::transport::Server;
use crate::app::DcacheApp;
use crate::network::api;
use crate::network::management;
use crate::network::raft;
use crate::network::raft_network_impl::DcacheNetwork;
use crate::protobuf::dcache::dcache_service_client::DcacheServiceClient;
use crate::protobuf::dcache::dcache_service_server::DcacheServiceServer;
use crate::protobuf::dcache::Learner;
use crate::store::DcacheRequest;
use crate::store::DcacheResponse;
use crate::store::DcacheStore;
pub mod app;
mod mcaptcha;
pub mod network;
mod pool;
mod protobuf;
pub mod store;
pub type DcacheNodeId = u64;
@ -98,7 +97,6 @@ pub async fn start_example_raft_node(
let store = Arc::new(DcacheStore::new(salt));
let (log_store, state_machine) = Adaptor::new(store.clone());
let client = reqwest::Client::new();
// Create the network layer that will connect and communicate the raft instances and
// will be used in conjunction with the store created above.
@ -106,7 +104,7 @@ pub async fn start_example_raft_node(
let (manager_tx, manager_rx) = tokio::sync::mpsc::channel(1000);
// let health = Arc::new(crate::network::raft_network_impl::HealthLedger::new(manager_tx));
// let network = Arc::new(DcacheNetwork::new(health));
let network = Arc::new(DcacheNetwork::new(manager_tx, client.clone()));
let network = Arc::new(DcacheNetwork::new(manager_tx));
// Create a local raft instance.
let raft = Raft::new(
@ -120,67 +118,50 @@ pub async fn start_example_raft_node(
.unwrap();
raft.enable_heartbeat(true);
raft.enable_elect(true);
// raft.enable_tick(true);
// Create an application that will store all the instances created above, this will
// be later used on the actix-web services.
let app = Data::new(DcacheApp {
let app = DcacheApp {
id: node_id,
addr: http_addr.clone(),
raft,
store,
config,
network,
});
};
let app = Arc::new(app);
let dcache_service = protobuf::MyDcacheImpl::new(app.clone());
if introducer_addr == http_addr {
app.init().await.unwrap();
}
let app_copy = app.clone();
// Start the actix-web server.
let server = HttpServer::new(move || {
App::new()
.wrap(Logger::default())
.wrap(Logger::new("%a %{User-Agent}i"))
.wrap(middleware::Compress::default())
.app_data(app.clone())
// raft internal RPC
.service(raft::append)
.service(raft::snapshot)
.service(raft::vote)
// admin API
.service(management::init)
.service(management::add_learner)
.service(management::change_membership)
.service(management::metrics)
// application API
.service(api::write)
.service(api::state)
.service(api::read)
.service(api::pipeline_read)
.service(api::pipeline_write)
// .service(api::consistent_read)
});
let x = server.bind(&http_addr)?;
let svc = DcacheServiceServer::new(dcache_service);
let x = Server::builder()
.add_service(svc)
.serve(http_addr.clone().parse().unwrap());
let server_fut = tokio::spawn(x);
let server_fut = tokio::spawn(x.run());
tokio::time::sleep(std::time::Duration::new(3, 0)).await;
let req: (DcacheNodeId, String) = (node_id, http_addr);
let c = reqwest::Client::new();
c.post(format!("http://{}/add-learner", introducer_addr))
.json(&req)
.send()
let url = format!("http://{}", introducer_addr);
let mut client = DcacheServiceClient::connect(url).await.unwrap();
client
.add_learner(Learner {
id: node_id,
addr: http_addr,
})
.await
.unwrap();
// let health_job = tokio::spawn(DcacheApp::health_job(app_copy));
let health_metrics_handle =
crate::network::management::HealthMetrics::spawn(app_copy, 5, manager_rx).await;
server_fut.await??;
server_fut.await?.unwrap();
health_metrics_handle.abort();
// health_job.abort();
Ok(())
}

330
src/mcaptcha/cache.rs Normal file
View file

@ -0,0 +1,330 @@
/*
* mCaptcha - A proof of work based DoS protection system
* Copyright © 2021 Aravinth Manivannan <realravinth@batsense.net>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
//! In-memory cache implementation backed by [DashMap]
use std::sync::Arc;
use std::time::Duration;
use dashmap::DashMap;
use serde::{Deserialize, Serialize};
use libmcaptcha::cache::messages::*;
use libmcaptcha::errors::*;
#[derive(Clone, Default, Serialize, Deserialize)]
/// cache data structure implementing [Save]
pub struct HashCache {
difficulty_map: Arc<DashMap<String, CachedPoWConfig>>,
result_map: Arc<DashMap<String, (String, u64)>>,
}
impl HashCache {
// save [PoWConfig] to cache
fn save_pow_config(&self, config: CachePoW) -> CaptchaResult<()> {
let challenge = config.string;
let config: CachedPoWConfig = CachedPoWConfig {
key: config.key,
difficulty_factor: config.difficulty_factor,
duration: config.duration,
};
if self.difficulty_map.get(&challenge).is_none() {
self.difficulty_map.insert(challenge, config);
Ok(())
} else {
Err(CaptchaError::InvalidPoW)
}
}
pub async fn clean_all_after_cold_start(&self, updated: HashCache) {
updated.difficulty_map.iter().for_each(|x| {
self.difficulty_map
.insert(x.key().to_owned(), x.value().to_owned());
});
updated.result_map.iter().for_each(|x| {
self.result_map
.insert(x.key().to_owned(), x.value().to_owned());
});
let cache = self.clone();
let fut = async move {
for values in cache.result_map.iter() {
let inner_cache = cache.clone();
let duration = values.value().1;
let key = values.key().to_owned();
let inner_fut = async move {
tokio::time::sleep(Duration::new(duration, 0)).await;
inner_cache.remove_cache_result(&key);
};
tokio::spawn(inner_fut);
}
for values in cache.difficulty_map.iter() {
let inner_cache = cache.clone();
let duration = values.value().duration;
let key = values.key().to_owned();
let inner_fut = async move {
tokio::time::sleep(Duration::new(duration, 0)).await;
inner_cache.remove_pow_config(&key);
};
tokio::spawn(inner_fut);
}
};
tokio::spawn(fut);
}
// retrieve [PoWConfig] from cache. Deletes config post retrieval
pub fn retrieve_pow_config(&self, msg: VerifyCaptchaResult) -> Option<CachedPoWConfig> {
if let Some(difficulty_factor) = self.remove_pow_config(&msg.token) {
Some(difficulty_factor)
} else {
None
}
}
// delete [PoWConfig] from cache
pub fn remove_pow_config(&self, string: &str) -> Option<CachedPoWConfig> {
self.difficulty_map.remove(string).map(|x| x.1)
}
// save captcha result
fn save_captcha_result(&self, res: CacheResult) {
self.result_map.insert(res.token, (res.key, res.duration));
}
// verify captcha result
pub fn verify_captcha_result(&self, challenge: VerifyCaptchaResult) -> bool {
    self.remove_cache_result(&challenge.token)
        .map_or(false, |captcha_id| captcha_id == challenge.key)
}
// delete cache result
pub fn remove_cache_result(&self, string: &str) -> Option<String> {
self.result_map.remove(string).map(|x| x.1 .0)
}
pub fn cache_pow(&self, msg: CachePoW) {
use std::time::Duration;
use tokio::time::sleep;
let duration: Duration = Duration::new(msg.duration, 0);
let string = msg.string.clone();
let cache = self.clone();
let wait_for = async move {
sleep(duration).await;
//delay_for(duration).await;
cache.remove_pow_config(&string);
};
let _ = self.save_pow_config(msg);
tokio::spawn(wait_for);
}
/// cache PoW result
pub fn cache_result(&self, msg: CacheResult) {
use std::time::Duration;
use tokio::time::sleep;
let token = msg.token.clone();
let duration: Duration = Duration::new(msg.duration, 0);
let cache = self.clone();
let wait_for = async move {
sleep(duration).await;
//delay_for(duration).await;
cache.remove_cache_result(&token);
};
tokio::spawn(wait_for);
let _ = self.save_captcha_result(msg);
}
}
#[cfg(test)]
mod tests {
use super::*;
use libmcaptcha::master::AddVisitorResult;
use libmcaptcha::pow::PoWConfig;
use std::time::Duration;
#[actix_rt::test]
async fn merge_works() {
const DIFFICULTY_FACTOR: u32 = 54;
const RES: &str = "b";
const DURATION: u64 = 5;
const KEY: &str = "mcaptchakey";
let pow: PoWConfig = PoWConfig::new(DIFFICULTY_FACTOR, KEY.into()); //salt is dummy here
let cache = HashCache::default();
let new_cache = HashCache::default();
let visitor_result = AddVisitorResult {
difficulty_factor: DIFFICULTY_FACTOR,
duration: DURATION,
};
let string = pow.string.clone();
let msg = CachePoWBuilder::default()
.string(pow.string.clone())
.difficulty_factor(DIFFICULTY_FACTOR)
.duration(visitor_result.duration)
.key(KEY.into())
.build()
.unwrap();
cache.cache_pow(msg);
let add_cache = CacheResult {
key: KEY.into(),
token: RES.into(),
duration: DURATION,
};
cache.cache_result(add_cache.clone());
new_cache.clean_all_after_cold_start(cache.clone()).await;
let msg = VerifyCaptchaResult {
token: string.clone(),
key: KEY.into(),
};
let cache_difficulty_factor = cache.retrieve_pow_config(msg.clone()).unwrap();
let new_cache_difficulty_factor = new_cache.retrieve_pow_config(msg.clone()).unwrap();
assert_eq!(DIFFICULTY_FACTOR, cache_difficulty_factor.difficulty_factor);
assert_eq!(
DIFFICULTY_FACTOR,
new_cache_difficulty_factor.difficulty_factor
);
let verify_msg = VerifyCaptchaResult {
key: KEY.into(),
token: RES.into(),
};
assert!(new_cache.verify_captcha_result(verify_msg.clone()));
assert!(!new_cache.verify_captcha_result(verify_msg.clone()));
let duration: Duration = Duration::new(5, 0);
//sleep(DURATION + DURATION).await;
tokio::time::sleep(duration + duration).await;
let expired_string = cache.retrieve_pow_config(msg.clone());
assert_eq!(None, expired_string);
let expired_string = new_cache.retrieve_pow_config(msg);
assert_eq!(None, expired_string);
cache.cache_result(add_cache);
new_cache.clean_all_after_cold_start(cache.clone()).await;
tokio::time::sleep(duration + duration).await;
assert!(!new_cache.verify_captcha_result(verify_msg.clone()));
assert!(!cache.verify_captcha_result(verify_msg));
}
#[actix_rt::test]
async fn hashcache_pow_cache_works() {
const DIFFICULTY_FACTOR: u32 = 54;
const DURATION: u64 = 5;
const KEY: &str = "mcaptchakey";
let cache = HashCache::default();
let pow: PoWConfig = PoWConfig::new(DIFFICULTY_FACTOR, KEY.into()); //salt is dummy here
let visitor_result = AddVisitorResult {
difficulty_factor: DIFFICULTY_FACTOR,
duration: DURATION,
};
let string = pow.string.clone();
let msg = CachePoWBuilder::default()
.string(pow.string.clone())
.difficulty_factor(DIFFICULTY_FACTOR)
.duration(visitor_result.duration)
.key(KEY.into())
.build()
.unwrap();
cache.cache_pow(msg);
let msg = VerifyCaptchaResult {
token: string.clone(),
key: KEY.into(),
};
let cache_difficulty_factor = cache.retrieve_pow_config(msg.clone()).unwrap();
assert_eq!(DIFFICULTY_FACTOR, cache_difficulty_factor.difficulty_factor);
let duration: Duration = Duration::new(5, 0);
//sleep(DURATION + DURATION).await;
tokio::time::sleep(duration + duration).await;
let expired_string = cache.retrieve_pow_config(msg);
assert_eq!(None, expired_string);
}
#[actix_rt::test]
async fn hashcache_result_cache_works() {
const DURATION: u64 = 5;
const KEY: &str = "a";
const RES: &str = "b";
let cache = HashCache::default();
// send value to cache
// send another value to cache for auto delete
// verify_captcha_result
// delete
// wait for timeout and verify_captcha_result against second value
let add_cache = CacheResult {
key: KEY.into(),
token: RES.into(),
duration: DURATION,
};
cache.cache_result(add_cache);
let verify_msg = VerifyCaptchaResult {
key: KEY.into(),
token: RES.into(),
};
assert!(cache.verify_captcha_result(verify_msg.clone()));
// duplicate
assert!(!cache.verify_captcha_result(verify_msg));
let verify_msg = VerifyCaptchaResult {
key: "cz".into(),
token: RES.into(),
};
assert!(!cache.verify_captcha_result(verify_msg));
let duration: Duration = Duration::new(5, 0);
tokio::time::sleep(duration + duration).await;
let verify_msg = VerifyCaptchaResult {
key: KEY.into(),
token: RES.into(),
};
assert!(!cache.verify_captcha_result(verify_msg));
}
}

398
src/mcaptcha/defense.rs Normal file
View file

@ -0,0 +1,398 @@
/*
* mCaptcha - A proof of work based DoS protection system
* Copyright © 2021 Aravinth Manivannan <realravinth@batsense.net>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
use serde::{Deserialize, Serialize};
use libmcaptcha::defense::Level;
use libmcaptcha::errors::*;
//
///// Level struct that describes threshold-difficulty factor mapping
//#[derive(Debug, Deserialize, Serialize, Copy, Clone, PartialEq)]
//pub struct Level {
// pub visitor_threshold: u32,
// pub difficulty_factor: u32,
//}
//
///// Bulder struct for [Level] to describe threshold-difficulty factor mapping
//#[derive(Debug, Copy, Clone, PartialEq)]
//pub struct LevelBuilder {
// visitor_threshold: Option<u32>,
// difficulty_factor: Option<u32>,
//}
//
//impl Default for LevelBuilder {
// fn default() -> Self {
// LevelBuilder {
// visitor_threshold: None,
// difficulty_factor: None,
// }
// }
//}
//
//impl LevelBuilder {
// /// set visitor count for level
// pub fn visitor_threshold(&mut self, visitor_threshold: u32) -> &mut Self {
// self.visitor_threshold = Some(visitor_threshold);
// self
// }
//
// /// set difficulty factor for level. difficulty_factor can't be zero because
// /// Difficulty is calculated as:
// /// ```no_run
// /// let difficulty_factor = 500;
// /// let difficulty = u128::max_value() - u128::max_value() / difficulty_factor;
// /// ```
// /// the higher the `difficulty_factor`, the higher the difficulty.
// pub fn difficulty_factor(&mut self, difficulty_factor: u32) -> CaptchaResult<&mut Self> {
// if difficulty_factor > 0 {
// self.difficulty_factor = Some(difficulty_factor);
// Ok(self)
// } else {
// Err(CaptchaError::DifficultyFactorZero)
// }
// }
//
// /// build Level struct
// pub fn build(&mut self) -> CaptchaResult<Level> {
// if self.visitor_threshold.is_none() {
// Err(CaptchaError::SetVisitorThreshold)
// } else if self.difficulty_factor.is_none() {
// Err(CaptchaError::SetDifficultyFactor)
// } else {
// Ok(Level {
// difficulty_factor: self.difficulty_factor.unwrap(),
// visitor_threshold: self.visitor_threshold.unwrap(),
// })
// }
// }
//}
//
/// Builder struct for [Defense]
#[derive(Debug, Clone, PartialEq)]
pub struct DefenseBuilder {
levels: Vec<Level>,
}
impl Default for DefenseBuilder {
fn default() -> Self {
DefenseBuilder { levels: vec![] }
}
}
impl DefenseBuilder {
/// add a level to [Defense]
pub fn add_level(&mut self, level: Level) -> CaptchaResult<&mut Self> {
for i in self.levels.iter() {
if i.visitor_threshold == level.visitor_threshold {
return Err(CaptchaError::DuplicateVisitorCount);
}
}
self.levels.push(level);
Ok(self)
}
/// Build [Defense]
pub fn build(&mut self) -> CaptchaResult<Defense> {
if !self.levels.is_empty() {
// sort levels to arrange in ascending order
self.levels.sort_by_key(|a| a.visitor_threshold);
for level in self.levels.iter() {
if level.difficulty_factor == 0 {
return Err(CaptchaError::DifficultyFactorZero);
}
}
// as visitor count increases, difficulty_factor should increase too
// if it decreases, an error must be thrown
for i in 0..self.levels.len() - 1 {
if self.levels[i].difficulty_factor > self.levels[i + 1].difficulty_factor {
return Err(CaptchaError::DecreaseingDifficultyFactor);
}
}
Ok(Defense {
levels: self.levels.to_owned(),
})
} else {
Err(CaptchaError::LevelEmpty)
}
}
}
/// Describes all the different [Level]s at which an mCaptcha system operates
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
pub struct Defense {
levels: Vec<Level>,
// index of current visitor threshold
}
impl From<Defense> for Vec<Level> {
fn from(d: Defense) -> Self {
d.levels
}
}
impl Defense {
// Difficulty is calculated as:
// `let difficulty = u128::max_value() - u128::max_value() / difficulty_factor;`
// The higher the `difficulty_factor`, the higher the difficulty.
// /// Get difficulty factor of current level of defense
// pub fn get_difficulty(&self, current_visitor_threshold: usize) -> u32 {
// self.levels[current_visitor_threshold].difficulty_factor
// }
//
// /// tighten up defense. Increases defense level by a factor of one.
// /// When defense is at max level, calling this method will have no effect
// pub fn tighten_up(&mut self) {
// if self.current_visitor_threshold < self.levels.len() - 1 {
// self.current_visitor_threshold += 1;
// }
// }
// /// Loosen up defense. Decreases defense level by a factor of one.
// /// When defense is at the lowest level, calling this method will have no effect.
// pub fn loosen_up(&mut self) {
// if self.current_visitor_threshold > 0 {
// self.current_visitor_threshold -= 1;
// }
// }
//
// /// Set defense to maximum level
// pub fn max_defense(&mut self) {
// self.current_visitor_threshold = self.levels.len() - 1;
// }
//
// /// Set defense to minimum level
// pub fn min_defense(&mut self) {
// self.current_visitor_threshold = 0;
// }
//
pub fn get_levels(&self) -> Vec<Level> {
self.levels.clone()
}
/// Get the [Level] that applies to the given visitor count
pub fn current_level(&self, current_visitor_level: u32) -> &Level {
for level in self.levels.iter() {
if current_visitor_level <= level.visitor_threshold {
return level;
}
}
self.levels.last().unwrap()
// &self.levels[self.current_visitor_threshold]
}
//
// /// Get current level's visitor threshold
// pub fn visitor_threshold(&self) -> u32 {
// self.levels[self.current_visitor_threshold].difficulty_factor
// }
}
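// Usage sketch: how a Defense built from ascending levels resolves a visitor
// count to a difficulty factor via `current_level`, and how that factor feeds
// the PoW difficulty formula documented above. Thresholds and factors below are
// illustrative; the module mirrors the unit tests that follow.
#[cfg(test)]
mod usage_sketch {
    use super::*;
    use libmcaptcha::defense::LevelBuilder;

    #[test]
    fn current_level_maps_visitors_to_difficulty() {
        let defense = DefenseBuilder::default()
            .add_level(
                LevelBuilder::default()
                    .visitor_threshold(50)
                    .difficulty_factor(500)
                    .unwrap()
                    .build()
                    .unwrap(),
            )
            .unwrap()
            .add_level(
                LevelBuilder::default()
                    .visitor_threshold(5000)
                    .difficulty_factor(50_000)
                    .unwrap()
                    .build()
                    .unwrap(),
            )
            .unwrap()
            .build()
            .unwrap();

        // a visitor count at or below a threshold selects that level
        assert_eq!(defense.current_level(10).difficulty_factor, 500);
        // counts above every threshold fall back to the last (hardest) level
        assert_eq!(defense.current_level(1_000_000).difficulty_factor, 50_000);

        // PoW difficulty derived from a difficulty factor, per the formula above
        let difficulty_factor = 500u128;
        let difficulty = u128::max_value() - u128::max_value() / difficulty_factor;
        assert!(difficulty > 0);
    }
}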
#[cfg(test)]
mod tests {
use super::*;
use libmcaptcha::defense::Level;
use libmcaptcha::LevelBuilder;
#[test]
fn defense_builder_duplicate_visitor_threshold() {
let mut defense_builder = DefenseBuilder::default();
let err = defense_builder
.add_level(
LevelBuilder::default()
.visitor_threshold(50)
.difficulty_factor(50)
.unwrap()
.build()
.unwrap(),
)
.unwrap()
.add_level(
LevelBuilder::default()
.visitor_threshold(50)
.difficulty_factor(50)
.unwrap()
.build()
.unwrap(),
);
assert_eq!(err, Err(CaptchaError::DuplicateVisitorCount));
}
#[test]
fn defense_builder_decreasing_difficulty_factor() {
let mut defense_builder = DefenseBuilder::default();
let err = defense_builder
.add_level(
LevelBuilder::default()
.visitor_threshold(50)
.difficulty_factor(50)
.unwrap()
.build()
.unwrap(),
)
.unwrap()
.add_level(
LevelBuilder::default()
.visitor_threshold(500)
.difficulty_factor(10)
.unwrap()
.build()
.unwrap(),
)
.unwrap()
.build();
assert_eq!(err, Err(CaptchaError::DecreaseingDifficultyFactor));
}
#[test]
fn checking_for_integer_overflow() {
let mut defense = DefenseBuilder::default()
.add_level(
LevelBuilder::default()
.visitor_threshold(5)
.difficulty_factor(5)
.unwrap()
.build()
.unwrap(),
)
.unwrap()
.add_level(
LevelBuilder::default()
.visitor_threshold(10)
.difficulty_factor(50)
.unwrap()
.build()
.unwrap(),
)
.unwrap()
.add_level(
LevelBuilder::default()
.visitor_threshold(20)
.difficulty_factor(60)
.unwrap()
.build()
.unwrap(),
)
.unwrap()
.add_level(
LevelBuilder::default()
.visitor_threshold(30)
.difficulty_factor(65)
.unwrap()
.build()
.unwrap(),
)
.unwrap()
.build()
.unwrap();
// for _ in 0..500 {
// defense.tighten_up();
// }
//
// defense.get_difficulty();
// for _ in 0..500000 {
// defense.tighten_up();
// }
//
defense.current_level(10_000_000);
}
fn get_defense() -> Defense {
DefenseBuilder::default()
.add_level(
LevelBuilder::default()
.visitor_threshold(50)
.difficulty_factor(50)
.unwrap()
.build()
.unwrap(),
)
.unwrap()
.add_level(
LevelBuilder::default()
.visitor_threshold(500)
.difficulty_factor(5000)
.unwrap()
.build()
.unwrap(),
)
.unwrap()
.add_level(
LevelBuilder::default()
.visitor_threshold(5000)
.difficulty_factor(50000)
.unwrap()
.build()
.unwrap(),
)
.unwrap()
.add_level(
LevelBuilder::default()
.visitor_threshold(50000)
.difficulty_factor(500000)
.unwrap()
.build()
.unwrap(),
)
.unwrap()
.add_level(
LevelBuilder::default()
.visitor_threshold(500000)
.difficulty_factor(5000000)
.unwrap()
.build()
.unwrap(),
)
.unwrap()
.build()
.unwrap()
}
#[test]
fn defense_builder_works() {
let defense = get_defense();
assert_eq!(defense.levels[0].difficulty_factor, 50);
assert_eq!(defense.levels[1].difficulty_factor, 5000);
assert_eq!(defense.levels[2].difficulty_factor, 50_000);
assert_eq!(defense.levels[3].difficulty_factor, 500_000);
assert_eq!(defense.levels[4].difficulty_factor, 5_000_000);
}
#[test]
fn tighten_up_works() {
let defense = get_defense();
assert_eq!(defense.current_level(0).difficulty_factor, 50);
assert_eq!(defense.current_level(500).difficulty_factor, 5_000);
assert_eq!(defense.current_level(501).difficulty_factor, 50_000);
assert_eq!(defense.current_level(5_000).difficulty_factor, 50_000);
assert_eq!(defense.current_level(5_001).difficulty_factor, 500_000);
assert_eq!(defense.current_level(50_000).difficulty_factor, 500_000);
assert_eq!(defense.current_level(50_001).difficulty_factor, 5_000_000);
assert_eq!(defense.current_level(500_000).difficulty_factor, 5_000_000);
assert_eq!(defense.current_level(500_001).difficulty_factor, 5_000_000);
}
}

595
src/mcaptcha/mcaptcha.rs Normal file
View file

@ -0,0 +1,595 @@
/* mCaptcha - A proof of work based DoS protection system
* Copyright © 2021 Aravinth Manivannan <realravinth@batsense.net>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
use std::collections::HashMap;
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;
use std::time::Duration;
use dashmap::DashMap;
use serde::{Deserialize, Serialize};
use super::defense::Defense;
use libmcaptcha::errors::*;
use libmcaptcha::master::messages as ManagerMessages;
/// Builder for [MCaptcha]
#[derive(Clone, Serialize, Deserialize, Debug)]
pub struct MCaptchaBuilder {
visitor_threshold: u32,
defense: Option<Defense>,
duration: Option<u64>,
}
impl Default for MCaptchaBuilder {
fn default() -> Self {
MCaptchaBuilder {
visitor_threshold: 0,
defense: None,
duration: None,
}
}
}
impl MCaptchaBuilder {
/// set defense
pub fn defense(&mut self, d: Defense) -> &mut Self {
self.defense = Some(d);
self
}
/// set duration
pub fn duration(&mut self, d: u64) -> &mut Self {
self.duration = Some(d);
self
}
/// Builds new [MCaptcha]
pub fn build(self: &mut MCaptchaBuilder) -> CaptchaResult<MCaptcha> {
if self.duration.is_none() {
Err(CaptchaError::PleaseSetValue("duration".into()))
} else if self.defense.is_none() {
Err(CaptchaError::PleaseSetValue("defense".into()))
} else if self.duration <= Some(0) {
Err(CaptchaError::CaptchaDurationZero)
} else {
let m = MCaptcha {
duration: self.duration.unwrap(),
defense: self.defense.clone().unwrap(),
visitor_threshold: Arc::new(AtomicU32::new(self.visitor_threshold)),
};
Ok(m)
}
}
}
#[derive(Clone, Serialize, Deserialize, Debug)]
pub struct MCaptcha {
visitor_threshold: Arc<AtomicU32>,
defense: Defense,
duration: u64,
}
impl MCaptcha {
/// increments the visitor count by one and returns the difficulty factor of the resulting level
#[inline]
pub fn add_visitor(&self) -> u32 {
// self.visitor_threshold += 1;
let current_visitor_level = self.visitor_threshold.fetch_add(1, Ordering::SeqCst) + 1;
let current_level = self.defense.current_level(current_visitor_level);
current_level.difficulty_factor
}
/// sets the visitor count to the specified value
#[inline]
pub fn set_visitor_count(&self, new_current: u32) {
self.visitor_threshold
.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |current| {
if current != new_current {
Some(new_current)
} else {
None
}
});
}
/// decrements the visitor count by the specified count
#[inline]
pub fn decrement_visitor_by(&self, count: u32) {
self.visitor_threshold
.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |mut current| {
if current > 0 {
if current >= count {
current -= count;
} else {
current = 0;
}
Some(current)
} else {
None
}
});
}
/// get the current visitor count
pub fn get_visitors(&self) -> u32 {
self.visitor_threshold.load(Ordering::SeqCst)
}
}
#[derive(Clone, Serialize, Deserialize)]
pub struct Manager {
pub captchas: Arc<DashMap<String, Arc<MCaptcha>>>,
pub gc: u64,
}
impl Manager {
/// add [MCaptcha] to [Manager]
pub fn add_captcha(&self, m: Arc<MCaptcha>, id: String) {
self.captchas.insert(id, m);
}
/// create new [Manager]
/// accepts a `u64` to configure the garbage collection period
pub fn new(gc: u64) -> Self {
Manager {
captchas: Arc::new(DashMap::new()),
gc,
}
}
fn gc(captchas: Arc<DashMap<String, Arc<MCaptcha>>>) {
for captcha in captchas.iter() {
let visitor = { captcha.value().get_visitors() };
if visitor == 0 {
captchas.remove(captcha.key());
}
}
}
/// get [MCaptcha] from [Manager]
pub fn get_captcha(&self, id: &str) -> Option<Arc<MCaptcha>> {
if let Some(captcha) = self.captchas.get(id) {
Some(captcha.clone())
} else {
None
}
}
/// removes [MCaptcha] from [Manager]
pub fn rm_captcha(&self, id: &str) -> Option<(String, Arc<MCaptcha>)> {
self.captchas.remove(id)
}
/// renames an [MCaptcha]
pub fn rename(&self, current_id: &str, new_id: String) {
// If the captcha isn't present, it's okay to not throw an error
// since captchas are lazily initialized and are cleaned up when inactive
if let Some((_, captcha)) = self.captchas.remove(current_id) {
self.add_captcha(captcha, new_id);
}
}
pub async fn clean_all_after_cold_start(&self, updated: Manager) {
updated.captchas.iter().for_each(|x| {
self.captchas
.insert(x.key().to_owned(), x.value().to_owned());
});
let captchas = self.clone();
let keys: Vec<String> = captchas
.captchas
.clone()
.iter()
.map(|x| x.key().to_owned())
.collect();
let fut = async move {
tokio::time::sleep(Duration::new(captchas.gc, 0)).await;
for key in keys.iter() {
captchas.rm_captcha(key);
}
};
tokio::spawn(fut);
}
pub fn add_visitor(
&self,
msg: &ManagerMessages::AddVisitor,
) -> Option<libmcaptcha::master::AddVisitorResult> {
if let Some(captcha) = self.captchas.get(&msg.0) {
let difficulty_factor = captcha.add_visitor();
// let id = msg.0.clone();
let c = captcha.clone();
let captchas = self.captchas.clone();
let fut = async move {
tokio::time::sleep(Duration::new(c.duration, 0)).await;
c.decrement_visitor_by(1);
// Self::gc(captchas);
// if c.get_visitors() == 0 {
// println!("Removing captcha addvivi");
// captchas.remove(&id);
// }
};
tokio::spawn(fut);
Some(libmcaptcha::master::AddVisitorResult {
duration: captcha.duration,
difficulty_factor,
})
} else {
None
}
}
pub fn get_internal_data(&self) -> HashMap<String, libmcaptcha::mcaptcha::MCaptcha> {
let mut res = HashMap::with_capacity(self.captchas.len());
for value in self.captchas.iter() {
res.insert(value.key().to_owned(), value.value().as_ref().into());
}
res
}
pub fn set_internal_data(&self, mut map: HashMap<String, libmcaptcha::mcaptcha::MCaptcha>) {
for (id, captcha) in map.drain() {
let visitors = captcha.get_visitors();
let new_captcha: MCaptcha = (&captcha).into();
let new_captcha = Arc::new(new_captcha);
self.captchas.insert(id.clone(), new_captcha.clone());
let msg = ManagerMessages::AddVisitor(id);
for _ in 0..visitors {
self.add_visitor(&msg);
}
}
}
}
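// Usage sketch: the Manager keeps MCaptcha instances in a DashMap, hands out a
// difficulty factor on every AddVisitor message, and lets the visitor count
// decay after `duration` seconds through the spawned sleep task. The captcha id
// below is illustrative; helpers are reused from the tests module at the end of
// this file, which this sketch closely mirrors.
#[cfg(test)]
mod manager_usage_sketch {
    use super::*;

    #[actix_rt::test]
    async fn visitor_count_decays_after_duration() {
        let manager = Manager::new(1);
        manager.add_captcha(Arc::new(tests::get_mcaptcha()), "sketch".into());

        // every AddVisitor bumps the count and returns the current difficulty
        let res = manager
            .add_visitor(&ManagerMessages::AddVisitor("sketch".into()))
            .unwrap();
        assert!(res.difficulty_factor > 0);

        // after the captcha's duration elapses, the spawned task decrements the count
        tokio::time::sleep(Duration::new(tests::DURATION * 2, 0)).await;
        assert_eq!(manager.get_captcha("sketch").unwrap().get_visitors(), 0);
    }
}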
impl From<&libmcaptcha::mcaptcha::MCaptcha> for MCaptcha {
fn from(value: &libmcaptcha::mcaptcha::MCaptcha) -> Self {
let mut defense = super::defense::DefenseBuilder::default();
for level in value.get_defense().get_levels() {
let _ = defense.add_level(level);
}
let defense = defense.build().unwrap();
let new_captcha = MCaptchaBuilder::default()
.defense(defense)
.duration(value.get_duration())
.build()
.unwrap();
// for _ in 0..value.get_visitors() {
// new_captcha.add_visitor();
// }
new_captcha
}
}
impl From<&MCaptcha> for libmcaptcha::mcaptcha::MCaptcha {
fn from(value: &MCaptcha) -> Self {
let mut defense = libmcaptcha::defense::DefenseBuilder::default();
for level in value.defense.get_levels().drain(0..) {
let _ = defense.add_level(level);
}
let defense = defense.build().unwrap();
let mut new_captcha = libmcaptcha::mcaptcha::MCaptchaBuilder::default()
.defense(defense)
.duration(value.duration)
.build()
.unwrap();
for _ in 0..value.get_visitors() {
new_captcha.add_visitor();
}
new_captcha
}
}
#[cfg(test)]
mod tests {
use super::*;
use libmcaptcha::defense::LevelBuilder;
use libmcaptcha::master::messages::*;
pub const LEVEL_1: (u32, u32) = (50, 50);
pub const LEVEL_2: (u32, u32) = (500, 500);
pub const DURATION: u64 = 5;
use crate::mcaptcha::defense::*;
pub fn get_defense() -> Defense {
DefenseBuilder::default()
.add_level(
LevelBuilder::default()
.visitor_threshold(LEVEL_1.0)
.difficulty_factor(LEVEL_1.1)
.unwrap()
.build()
.unwrap(),
)
.unwrap()
.add_level(
LevelBuilder::default()
.visitor_threshold(LEVEL_2.0)
.difficulty_factor(LEVEL_2.1)
.unwrap()
.build()
.unwrap(),
)
.unwrap()
.build()
.unwrap()
}
async fn race(manager: &Manager, id: String, count: (u32, u32)) {
let msg = ManagerMessages::AddVisitor(id);
for _ in 0..count.0 as usize - 1 {
manager.add_visitor(&msg);
}
}
// pub fn get_counter() -> Counter {
// get_mcaptcha().into()
// }
pub fn get_mcaptcha() -> MCaptcha {
MCaptchaBuilder::default()
.defense(get_defense())
.duration(DURATION)
.build()
.unwrap()
}
#[actix_rt::test]
async fn manager_works() {
let manager = Manager::new(1);
// let get_add_site_msg = |id: String, mcaptcha: MCaptcha| {
// AddSiteBuilder::default()
// .id(id)
// .mcaptcha(mcaptcha)
// .build()
// .unwrap()
// };
let id = "yo";
manager.add_captcha(Arc::new(get_mcaptcha()), id.into());
let mcaptcha_addr = manager.get_captcha(id);
assert!(mcaptcha_addr.is_some());
let mut mcaptcha_data = manager.get_internal_data();
mcaptcha_data.get_mut(id).unwrap().add_visitor();
mcaptcha_data.get_mut(id).unwrap().add_visitor();
mcaptcha_data.get_mut(id).unwrap().add_visitor();
// let mcaptcha_data: HashMap<String, libmcaptcha::mcaptcha::MCaptcha> = {
// let serialized = serde_json::to_string(&mcaptcha_data).unwrap();
// serde_json::from_str(&serialized).unwrap()
// };
// println!("{:?}", mcaptcha_data);
manager.set_internal_data(mcaptcha_data);
let mcaptcha_data = manager.get_internal_data();
assert_eq!(
manager.get_captcha(id).unwrap().get_visitors(),
mcaptcha_data.get(id).unwrap().get_visitors()
);
let new_id = "yoyo";
manager.rename(id, new_id.into());
{
let mcaptcha_addr = manager.get_captcha(new_id);
assert!(mcaptcha_addr.is_some());
let addr_doesnt_exist = manager.get_captcha(id);
assert!(addr_doesnt_exist.is_none());
let timer_expire = Duration::new(DURATION, 0);
tokio::time::sleep(timer_expire).await;
tokio::time::sleep(timer_expire).await;
}
// Manager::gc(manager.captchas.clone());
// let mcaptcha_addr = manager.get_captcha(new_id);
// assert_eq!(mcaptcha_addr.as_ref().unwrap().get_visitors(), 0);
// assert!(mcaptcha_addr.is_none());
//
// assert!(
// manager.rm_captcha(new_id.into()).is_some());
}
#[actix_rt::test]
async fn counter_defense_works() {
let manager = Manager::new(1);
let id = "yo";
manager.add_captcha(Arc::new(get_mcaptcha()), id.into());
let mut mcaptcha = manager
.add_visitor(&ManagerMessages::AddVisitor(id.to_string()))
.unwrap();
assert_eq!(mcaptcha.difficulty_factor, LEVEL_1.1);
race(&manager, id.to_string(), LEVEL_2).await;
mcaptcha = manager
.add_visitor(&ManagerMessages::AddVisitor(id.to_string()))
.unwrap();
assert_eq!(mcaptcha.difficulty_factor, LEVEL_2.1);
tokio::time::sleep(Duration::new(DURATION * 2, 0)).await;
assert_eq!(manager.get_captcha(id).unwrap().get_visitors(), 0);
}
}
//
//#[cfg(test)]
//pub mod tests {
// use super::*;
// use crate::defense::*;
// use crate::errors::*;
// use crate::mcaptcha;
// use crate::mcaptcha::MCaptchaBuilder;
//
// // constants for testing
// // (visitor count, level)
// pub const LEVEL_1: (u32, u32) = (50, 50);
// pub const LEVEL_2: (u32, u32) = (500, 500);
// pub const DURATION: u64 = 5;
//
// type MyActor = Addr<Counter>;
//
// pub fn get_defense() -> Defense {
// DefenseBuilder::default()
// .add_level(
// LevelBuilder::default()
// .visitor_threshold(LEVEL_1.0)
// .difficulty_factor(LEVEL_1.1)
// .unwrap()
// .build()
// .unwrap(),
// )
// .unwrap()
// .add_level(
// LevelBuilder::default()
// .visitor_threshold(LEVEL_2.0)
// .difficulty_factor(LEVEL_2.1)
// .unwrap()
// .build()
// .unwrap(),
// )
// .unwrap()
// .build()
// .unwrap()
// }
//
// async fn race(addr: Addr<Counter>, count: (u32, u32)) {
// for _ in 0..count.0 as usize - 1 {
// let _ = addr.send(AddVisitor).await.unwrap();
// }
// }
//
// pub fn get_counter() -> Counter {
// get_mcaptcha().into()
// }
//
// pub fn get_mcaptcha() -> MCaptcha {
// MCaptchaBuilder::default()
// .defense(get_defense())
// .duration(DURATION)
// .build()
// .unwrap()
// }
//
// #[test]
// fn mcaptcha_decrement_by_works() {
// let mut m = get_mcaptcha();
// for _ in 0..100 {
// m.add_visitor();
// }
// m.decrement_visitor_by(50);
// assert_eq!(m.get_visitors(), 50);
// m.decrement_visitor_by(500);
// assert_eq!(m.get_visitors(), 0);
// }
//
//
// #[actix_rt::test]
// async fn counter_defense_loosenup_works() {
// //use actix::clock::sleep;
// //use actix::clock::delay_for;
// let addr: MyActor = get_counter().start();
//
// race(addr.clone(), LEVEL_2).await;
// race(addr.clone(), LEVEL_2).await;
// let mut mcaptcha = addr.send(AddVisitor).await.unwrap();
// assert_eq!(mcaptcha.difficulty_factor, LEVEL_2.1);
//
// let duration = Duration::new(DURATION, 0);
// sleep(duration).await;
// //delay_for(duration).await;
//
// mcaptcha = addr.send(AddVisitor).await.unwrap();
// assert_eq!(mcaptcha.difficulty_factor, LEVEL_1.1);
// }
//
// #[test]
// fn test_mcatcptha_builder() {
// let defense = get_defense();
// let m = MCaptchaBuilder::default()
// .duration(0)
// .defense(defense.clone())
// .build();
//
// assert_eq!(m.err(), Some(CaptchaError::CaptchaDurationZero));
//
// let m = MCaptchaBuilder::default().duration(30).build();
// assert_eq!(
// m.err(),
// Some(CaptchaError::PleaseSetValue("defense".into()))
// );
//
// let m = MCaptchaBuilder::default().defense(defense).build();
// assert_eq!(
// m.err(),
// Some(CaptchaError::PleaseSetValue("duration".into()))
// );
// }
//
// #[actix_rt::test]
// async fn get_current_visitor_count_works() {
// let addr: MyActor = get_counter().start();
//
// addr.send(AddVisitor).await.unwrap();
// addr.send(AddVisitor).await.unwrap();
// addr.send(AddVisitor).await.unwrap();
// addr.send(AddVisitor).await.unwrap();
// let count = addr.send(GetCurrentVisitorCount).await.unwrap();
//
// assert_eq!(count, 4);
// }
//
// #[actix_rt::test]
// #[should_panic]
// async fn stop_works() {
// let addr: MyActor = get_counter().start();
// addr.send(Stop).await.unwrap();
// addr.send(AddVisitor).await.unwrap();
// }
//
// #[actix_rt::test]
// async fn get_set_internal_data_works() {
// let addr: MyActor = get_counter().start();
// let mut mcaptcha = addr.send(GetInternalData).await.unwrap();
// mcaptcha.add_visitor();
// addr.send(SetInternalData(mcaptcha.clone())).await.unwrap();
// assert_eq!(
// addr.send(GetInternalData).await.unwrap().get_visitors(),
// mcaptcha.get_visitors()
// );
//
// let duration = Duration::new(mcaptcha.get_duration() + 3, 0);
// sleep(duration).await;
// assert_eq!(addr.send(GetCurrentVisitorCount).await.unwrap(), 0);
// }
//
// #[actix_rt::test]
// async fn bulk_delete_works() {
// let addr: MyActor = get_counter().start();
// addr.send(AddVisitor).await.unwrap();
// addr.send(AddVisitor).await.unwrap();
// assert_eq!(addr.send(GetCurrentVisitorCount).await.unwrap(), 2);
// addr.send(BulkDecrement(3)).await.unwrap();
// assert_eq!(addr.send(GetCurrentVisitorCount).await.unwrap(), 0);
// }
//}

3
src/mcaptcha/mod.rs Normal file
View file

@ -0,0 +1,3 @@
pub mod cache;
mod defense;
pub mod mcaptcha;

View file

@ -1,157 +0,0 @@
/*
* mCaptcha - A proof of work based DoS protection system
* Copyright © 2023 Aravinth Manivannan <realravinth@batsense.net>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
use actix_web::get;
use actix_web::post;
use actix_web::web;
use actix_web::web::Data;
use actix_web::Responder;
use libmcaptcha::cache::messages::{CachedPoWConfig, RetrivePoW, VerifyCaptchaResult};
use libmcaptcha::master::messages::GetInternalData;
use serde::Deserialize;
use serde::Serialize;
use web::Json;
use crate::app::DcacheApp;
use crate::store::DcacheRequest;
#[post("/write")]
pub async fn write(
app: Data<DcacheApp>,
req: Json<DcacheRequest>,
) -> actix_web::Result<impl Responder> {
let response = app.raft.client_write(req.0).await;
Ok(Json(response))
}
#[get("/state")]
pub async fn state(app: Data<DcacheApp>) -> actix_web::Result<impl Responder> {
let sm = app.store.state_machine.read().await;
let resp = sm
.data
.master
.send(GetInternalData)
.await
.unwrap()
.await
.unwrap()
.unwrap();
Ok(Json(resp))
}
#[derive(Serialize, Deserialize, Clone, Debug)]
pub enum ReadRequest {
RetrivePoW(RetrivePoW), //Reader
VerifyCaptchaResult(VerifyCaptchaResult), //Reader
}
#[derive(Serialize, Deserialize, Debug, Clone)]
pub enum ReadResponse {
VerifyCaptchaResult(bool),
RetrivePoW(Option<CachedPoWConfig>),
}
#[post("/read")]
pub async fn read(
app: Data<DcacheApp>,
req: Json<ReadRequest>,
) -> actix_web::Result<impl Responder> {
let sm = app.store.state_machine.read().await;
let req = req.into_inner();
let res = match req {
ReadRequest::RetrivePoW(msg) => {
let cache_res = sm
.data
.cache
.send(msg.clone())
.await
.unwrap()
.await
.unwrap()
.unwrap();
ReadResponse::RetrivePoW(cache_res)
}
ReadRequest::VerifyCaptchaResult(msg) => {
let cache_res = sm
.data
.cache
.send(msg.clone())
.await
.unwrap()
.await
.unwrap()
.unwrap();
ReadResponse::VerifyCaptchaResult(cache_res)
}
};
Ok(Json(res))
}
#[post("/pipeline/read")]
pub async fn pipeline_read(
app: Data<DcacheApp>,
requests: Json<Vec<ReadRequest>>,
) -> actix_web::Result<impl Responder> {
let requests = requests.into_inner();
let mut responses = Vec::with_capacity(requests.len());
let sm = app.store.state_machine.read().await;
for request in requests {
let res = match request {
ReadRequest::RetrivePoW(msg) => {
let cache_res = sm
.data
.cache
.send(msg.clone())
.await
.unwrap()
.await
.unwrap()
.unwrap();
ReadResponse::RetrivePoW(cache_res)
}
ReadRequest::VerifyCaptchaResult(msg) => {
let cache_res = sm
.data
.cache
.send(msg.clone())
.await
.unwrap()
.await
.unwrap()
.unwrap();
ReadResponse::VerifyCaptchaResult(cache_res)
}
};
responses.push(res);
}
Ok(Json(responses))
}
#[post("/pipeline/write")]
pub async fn pipeline_write(
app: Data<DcacheApp>,
requests: Json<Vec<DcacheRequest>>,
) -> actix_web::Result<impl Responder> {
let mut responses = Vec::with_capacity(requests.len());
let mut requests = requests.into_inner();
for req in requests.drain(0..) {
responses.push(app.raft.client_write(req).await);
}
Ok(Json(responses))
}

View file

@ -15,66 +15,15 @@
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
use std::collections::BTreeMap;
use std::collections::BTreeSet;
use std::collections::HashMap;
use std::sync::Arc;
use actix_web::get;
use actix_web::post;
use actix_web::web;
use actix_web::web::Data;
use actix_web::Responder;
use openraft::error::Infallible;
use openraft::BasicNode;
use openraft::RaftMetrics;
use web::Json;
//use actix_web::web;
//use actix_web::web::Data;
use crate::app::DcacheApp;
use crate::DcacheNodeId;
#[post("/add-learner")]
pub async fn add_learner(
app: Data<DcacheApp>,
req: Json<(DcacheNodeId, String)>,
) -> actix_web::Result<impl Responder> {
let node_id = req.0 .0;
let node = BasicNode {
addr: req.0 .1.clone(),
};
let res = app.raft.add_learner(node_id, node, true).await;
Ok(Json(res))
}
#[post("/change-membership")]
pub async fn change_membership(
app: Data<DcacheApp>,
req: Json<BTreeSet<DcacheNodeId>>,
) -> actix_web::Result<impl Responder> {
let res = app.raft.change_membership(req.0, false).await;
Ok(Json(res))
}
#[post("/init")]
pub async fn init(app: Data<DcacheApp>) -> actix_web::Result<impl Responder> {
let mut nodes = BTreeMap::new();
nodes.insert(
app.id,
BasicNode {
addr: app.addr.clone(),
},
);
let res = app.raft.initialize(nodes).await;
Ok(Json(res))
}
#[get("/metrics")]
pub async fn metrics(app: Data<DcacheApp>) -> actix_web::Result<impl Responder> {
let metrics = app.raft.metrics().borrow().clone();
let res: Result<RaftMetrics<DcacheNodeId, BasicNode>, Infallible> = Ok(metrics);
Ok(Json(res))
}
use tokio::sync::mpsc;
#[derive(Debug)]
@ -87,7 +36,7 @@ pub struct HealthMetrics;
impl HealthMetrics {
pub async fn spawn(
app: Data<DcacheApp>,
app: Arc<DcacheApp>,
threshold: usize,
mut rx: mpsc::Receiver<HealthStatus>,
) -> tokio::task::JoinHandle<()> {
@ -114,7 +63,7 @@ impl HealthMetrics {
new_nodes.push(*node.0);
}
let res =
let _res =
app.raft.change_membership(new_nodes, false).await.unwrap();
}
} else {
@ -128,20 +77,3 @@ impl HealthMetrics {
tokio::spawn(fut)
}
}
//#[get("/self/remove/{id}")]
//pub async fn remove_node(app: Data<DcacheApp>, id: web::Path<u64>) -> actix_web::Result<impl Responder> {
// let cluster_metrics = app.raft.metrics().borrow().clone();
// let remote_id: u64 = 3;
// let mut new_nodes: Vec<DcacheNodeId> = Vec::new();
// for node in cluster_metrics.membership_config.nodes() {
// if *node.0 == remote_id {
// continue;
// }
//
// new_nodes.push(*node.0);
// }
//
// let res = app.raft.change_membership(new_nodes, false).await;
// Ok(Json(res))
//}

View file

@ -15,7 +15,7 @@
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
pub mod api;
//pub mod api;
pub mod management;
pub mod raft;
//pub mod raft;
pub mod raft_network_impl;

View file

@ -1,58 +0,0 @@
/*
* mCaptcha - A proof of work based DoS protection system
* Copyright © 2023 Aravinth Manivannan <realravinth@batsense.net>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
use actix_web::post;
use actix_web::web;
use actix_web::web::Data;
use actix_web::Responder;
use openraft::raft::AppendEntriesRequest;
use openraft::raft::InstallSnapshotRequest;
use openraft::raft::VoteRequest;
use web::Json;
use crate::app::DcacheApp;
use crate::DcacheNodeId;
use crate::DcacheTypeConfig;
// --- Raft communication
#[post("/raft-vote")]
pub async fn vote(
app: Data<DcacheApp>,
req: Json<VoteRequest<DcacheNodeId>>,
) -> actix_web::Result<impl Responder> {
let res = app.raft.vote(req.0).await;
Ok(Json(res))
}
#[post("/raft-append")]
pub async fn append(
app: Data<DcacheApp>,
req: Json<AppendEntriesRequest<DcacheTypeConfig>>,
) -> actix_web::Result<impl Responder> {
let res = app.raft.append_entries(req.0).await;
Ok(Json(res))
}
#[post("/raft-snapshot")]
pub async fn snapshot(
app: Data<DcacheApp>,
req: Json<InstallSnapshotRequest<DcacheTypeConfig>>,
) -> actix_web::Result<impl Responder> {
let res = app.raft.install_snapshot(req.0).await;
Ok(Json(res))
}

View file

@ -1,6 +1,3 @@
use std::collections::BTreeMap;
use std::collections::BTreeSet;
use std::collections::HashSet;
/*
* mCaptcha - A proof of work based DoS protection system
* Copyright © 2023 Aravinth Manivannan <realravinth@batsense.net>
@ -18,18 +15,17 @@ use std::collections::HashSet;
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
use std::collections::HashMap;
use std::sync::Arc;
use std::sync::RwLock;
use std::time::Duration;
use std::time::Instant;
use super::management::HealthStatus;
use crate::DcacheNodeId;
use crate::DcacheTypeConfig;
use async_trait::async_trait;
use openraft::error::InstallSnapshotError;
use openraft::error::NetworkError;
use openraft::error::RPCError;
use openraft::error::RaftError;
use openraft::error::RemoteError;
use openraft::raft::AppendEntriesRequest;
use openraft::raft::AppendEntriesResponse;
use openraft::raft::InstallSnapshotRequest;
@ -39,67 +35,130 @@ use openraft::raft::VoteResponse;
use openraft::BasicNode;
use openraft::RaftNetwork;
use openraft::RaftNetworkFactory;
use reqwest::Client;
use serde::de::DeserializeOwned;
use serde::Serialize;
use tokio::sync::mpsc::Sender;
use tonic::transport::channel::Channel;
use tower_service::Service;
use super::management::HealthStatus;
use crate::DcacheNodeId;
use crate::DcacheTypeConfig;
use crate::pool::*;
use crate::protobuf::dcache::dcache_service_client::DcacheServiceClient;
use crate::protobuf::dcache::RaftRequest;
#[derive(Debug)]
struct ChannelManager {}
#[async_trait]
impl ItemManager for ChannelManager {
type Key = String;
type Item = Channel;
type Error = tonic::transport::Error;
async fn build(&self, addr: &Self::Key) -> Result<Channel, tonic::transport::Error> {
tonic::transport::Endpoint::new(addr.clone())?
.connect()
.await
}
async fn check(&self, mut ch: Channel) -> Result<Channel, tonic::transport::Error> {
futures::future::poll_fn(|cx| (&mut ch).poll_ready(cx)).await?;
Ok(ch)
}
}
#[derive(Clone)]
pub struct DcacheNetwork {
pub signal: Sender<HealthStatus>,
pub client: Client,
conn_pool: Pool<ChannelManager>,
}
pub enum RPCType {
Vote,
Snapshot,
Append,
}
impl DcacheNetwork {
pub fn new(signal: Sender<HealthStatus>, client: Client) -> Self {
Self { signal, client }
pub fn new(signal: Sender<HealthStatus>) -> Self {
let mgr = ChannelManager {};
Self {
signal,
conn_pool: Pool::new(mgr, Duration::from_millis(50)),
}
}
pub async fn send_rpc<Req, Resp, Err>(
&self,
target: DcacheNodeId,
target_node: &BasicNode,
uri: &str,
req: Req,
event: RPCType,
) -> Result<Resp, RPCError<DcacheNodeId, BasicNode, Err>>
where
Req: Serialize,
Err: std::error::Error + DeserializeOwned,
Resp: DeserializeOwned,
{
let addr = &target_node.addr;
let mut client = self.make_client(&target, target_node).await;
let url = format!("http://{}/{}", addr, uri);
let res = match event {
RPCType::Vote => {
client
.vote(RaftRequest {
data: serde_json::to_string(&req).unwrap(),
})
.await
}
tracing::debug!("send_rpc to url: {}", url);
RPCType::Snapshot => {
client
.install_snapshot(RaftRequest {
data: serde_json::to_string(&req).unwrap(),
})
.await
}
let resp = match self.client.post(url).json(&req).send().await {
Ok(resp) => Ok(resp),
RPCType::Append => {
client
.append_entries(RaftRequest {
data: serde_json::to_string(&req).unwrap(),
})
.await
}
};
match res {
Ok(res) => {
let signal2 = self.signal.clone();
let fut = async move {
let _ = signal2.send(HealthStatus::Healthy(target)).await;
};
tokio::spawn(fut);
let res = res.into_inner();
Ok(serde_json::from_str(&res.data).unwrap())
}
Err(e) => {
self.signal.send(HealthStatus::Down(target)).await;
let _ = self.signal.send(HealthStatus::Down(target)).await;
Err(RPCError::Network(NetworkError::new(&e)))
}
}?;
tracing::debug!("client.post() is sent");
let res: Result<Resp, Err> = resp
.json()
.await
.map_err(|e| RPCError::Network(NetworkError::new(&e)))?;
let res = res.map_err(|e| RPCError::RemoteError(RemoteError::new(target, e)));
if res.is_ok() {
let signal2 = self.signal.clone();
let fut = async move {
let _ = signal2.send(HealthStatus::Healthy(target)).await;
};
tokio::spawn(fut);
}
res
}
pub async fn make_client(
&self,
target: &DcacheNodeId,
target_node: &BasicNode,
) -> DcacheServiceClient<Channel> {
let addr = format!("http://{}", &target_node.addr);
tracing::debug!("connect: target={}: {}", target, addr);
let channel = self.conn_pool.get(&addr).await.unwrap();
let client = DcacheServiceClient::new(channel);
tracing::info!("connected: target={}: {}", target, addr);
client
}
}
@ -134,7 +193,7 @@ impl RaftNetwork<DcacheTypeConfig> for DcacheNetworkConnection {
RPCError<DcacheNodeId, BasicNode, RaftError<DcacheNodeId>>,
> {
self.owner
.send_rpc(self.target, &self.target_node, "raft-append", req)
.send_rpc(self.target, &self.target_node, req, RPCType::Append)
.await
}
@ -146,7 +205,7 @@ impl RaftNetwork<DcacheTypeConfig> for DcacheNetworkConnection {
RPCError<DcacheNodeId, BasicNode, RaftError<DcacheNodeId, InstallSnapshotError>>,
> {
self.owner
.send_rpc(self.target, &self.target_node, "raft-snapshot", req)
.send_rpc(self.target, &self.target_node, req, RPCType::Snapshot)
.await
}
@ -158,7 +217,7 @@ impl RaftNetwork<DcacheTypeConfig> for DcacheNetworkConnection {
RPCError<DcacheNodeId, BasicNode, RaftError<DcacheNodeId>>,
> {
self.owner
.send_rpc(self.target, &self.target_node, "raft-vote", req)
.send_rpc(self.target, &self.target_node, req, RPCType::Vote)
.await
}
}

154
src/pool.rs Normal file
View file

@ -0,0 +1,154 @@
use std::collections::HashMap;
use std::fmt::Debug;
use std::hash::Hash;
use std::marker::PhantomData;
use std::sync::Arc;
use std::sync::Mutex;
use std::time::Duration;
use async_trait::async_trait;
use tokio::time::sleep;
//use log::debug;
//use crate::base::tokio;
pub type PoolItem<T> = Arc<tokio::sync::Mutex<Option<T>>>;
/// To build or check an item.
///
/// When an item is requested, the ItemManager `build()`s one for the pool.
/// When an item is reused, the ItemManager `check()`s whether it is still valid.
#[async_trait]
pub trait ItemManager {
type Key;
type Item;
type Error;
/// Make a new item to put into the pool.
///
/// An impl should guarantee that an item returned by `build()` passes `check()`.
async fn build(&self, key: &Self::Key) -> Result<Self::Item, Self::Error>;
/// Check if an existing item is still valid.
///
/// E.g.: check if a TCP connection is still alive.
/// If the item is valid, `check` should return it in an Ok().
/// Otherwise, the item should be dropped and `check` returns an Err().
async fn check(&self, item: Self::Item) -> Result<Self::Item, Self::Error>;
}
/// Pool assumes the items in it are `Clone`, thus it keeps only one item for each key.
#[allow(clippy::type_complexity)]
#[derive(Debug, Clone)]
pub struct Pool<Mgr>
where
Mgr: ItemManager + Debug,
{
/// The first sleep time when `build()` fails.
/// The next sleep time is twice the previous one.
pub initial_retry_interval: Duration,
/// Pooled items indexed by key.
pub items: Arc<Mutex<HashMap<Mgr::Key, PoolItem<Mgr::Item>>>>,
manager: Mgr,
err_type: PhantomData<Mgr::Error>,
n_retries: u32,
}
impl<Mgr> Pool<Mgr>
where
Mgr: ItemManager + Debug,
Mgr::Key: Clone + Eq + Hash + Send + Debug,
Mgr::Item: Clone + Sync + Send + Debug,
Mgr::Error: Sync + Debug,
{
pub fn new(manager: Mgr, initial_retry_interval: Duration) -> Self {
Pool {
initial_retry_interval,
items: Default::default(),
manager,
err_type: Default::default(),
n_retries: 3,
}
}
pub fn with_retries(mut self, retries: u32) -> Self {
self.n_retries = retries;
self
}
pub fn item_manager(&self) -> &Mgr {
&self.manager
}
/// Return a raw pool item.
///
/// The returned one may be uninitialized, i.e., it contains a None.
/// The lock for `items` should not be held for long, e.g. `build()`ing a new connection can take dozens of ms.
fn get_pool_item(&self, key: &Mgr::Key) -> PoolItem<Mgr::Item> {
let mut items = self.items.lock().unwrap();
if let Some(item) = items.get(key) {
item.clone()
} else {
let item = PoolItem::default();
items.insert(key.clone(), item.clone());
item
}
}
/// Return an item, by cloning an existing one or building a new one.
///
/// When returning an existing one, `check()` will be called on it to ensure it is still valid.
/// E.g., when returning a TCP connection.
// #[logcall::logcall(err = "debug")]
// #[minitrace::trace]
pub async fn get(&self, key: &Mgr::Key) -> Result<Mgr::Item, Mgr::Error> {
let pool_item = self.get_pool_item(key);
let mut guard = pool_item.lock().await;
let item_opt = (*guard).clone();
if let Some(ref item) = item_opt {
let check_res = self.manager.check(item.clone()).await;
// debug!("check reused item res: {:?}", check_res);
if let Ok(itm) = check_res {
return Ok(itm);
} else {
// mark broken conn as deleted
*guard = None;
}
}
let mut interval = self.initial_retry_interval;
for i in 0..self.n_retries {
// debug!("build new item of key: {:?}", key);
let new_item = self.manager.build(key).await;
// debug!("build new item of key res: {:?}", new_item);
match new_item {
Ok(x) => {
*guard = Some(x.clone());
return Ok(x);
}
Err(err) => {
if i == self.n_retries - 1 {
return Err(err);
}
}
}
sleep(interval).await;
interval *= 2;
}
unreachable!("the loop should always return!");
}
}
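// Usage sketch: a minimal ItemManager implementation showing the build()/check()
// contract and how Pool::get() reuses a cached item for a key. EchoManager and
// its string "connections" are made up for illustration; real usage in this
// crate pools tonic Channels (see the ChannelManager in raft_network_impl).
#[cfg(test)]
mod usage_sketch {
    use super::*;

    #[derive(Debug)]
    struct EchoManager;

    #[async_trait]
    impl ItemManager for EchoManager {
        type Key = String;
        type Item = String;
        type Error = ();

        async fn build(&self, key: &Self::Key) -> Result<Self::Item, Self::Error> {
            // "connect" by deriving an item from the key
            Ok(format!("conn-to-{}", key))
        }

        async fn check(&self, item: Self::Item) -> Result<Self::Item, Self::Error> {
            // items never go stale in this sketch
            Ok(item)
        }
    }

    #[actix_rt::test]
    async fn pool_reuses_items() {
        let pool = Pool::new(EchoManager, Duration::from_millis(10)).with_retries(2);

        // first get() builds the item, second get() passes check() and reuses it
        let first = pool.get(&"node-1".to_string()).await.unwrap();
        let second = pool.get(&"node-1".to_string()).await.unwrap();
        assert_eq!(first, second);
        assert_eq!(first, "conn-to-node-1");
    }
}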

546
src/protobuf.rs Normal file
View file

@ -0,0 +1,546 @@
use std::sync::Arc;
use libmcaptcha::cache::messages as CacheMessages;
use libmcaptcha::defense;
use libmcaptcha::master::messages as MasterMessages;
use libmcaptcha::mcaptcha;
use openraft::BasicNode;
use serde::de::DeserializeOwned;
use serde::Serialize;
use tonic::Response;
use dcache::dcache_request::DcacheRequest as PipelineReq;
use dcache::dcache_response::DcacheResponse as InnerPipelineRes;
use dcache::dcache_service_server::DcacheService;
use dcache::DcacheResponse as OuterPipelineRes;
use dcache::{Learner, RaftReply, RaftRequest};
use crate::app::DcacheApp;
use crate::store::{DcacheRequest, DcacheResponse};
pub mod dcache {
tonic::include_proto!("dcache"); // The string specified here must match the proto package name
}
#[derive(Clone)]
pub struct MyDcacheImpl {
app: Arc<DcacheApp>,
}
impl MyDcacheImpl {
pub fn new(app: Arc<DcacheApp>) -> Self {
Self { app }
}
}
#[tonic::async_trait]
impl DcacheService for MyDcacheImpl {
async fn add_learner(
&self,
request: tonic::Request<Learner>,
) -> std::result::Result<tonic::Response<RaftReply>, tonic::Status> {
let req = request.into_inner();
let node_id = req.id;
let node = BasicNode {
addr: req.addr.clone(),
};
println!("Learner added: {:?}", &req.addr);
let res = self.app.raft.add_learner(node_id, node, true).await;
Ok(Response::new(res.into()))
}
async fn add_captcha(
&self,
request: tonic::Request<dcache::AddCaptchaRequest>,
) -> std::result::Result<tonic::Response<RaftReply>, tonic::Status> {
let req = request.into_inner();
let res = self
.app
.raft
.client_write(DcacheRequest::AddCaptcha(req.into()))
.await;
Ok(Response::new(res.into()))
}
async fn add_visitor(
&self,
request: tonic::Request<dcache::CaptchaId>,
) -> std::result::Result<tonic::Response<dcache::OptionAddVisitorResult>, tonic::Status> {
let req = request.into_inner();
let res = self
.app
.raft
.client_write(DcacheRequest::AddVisitor(MasterMessages::AddVisitor(
req.id,
)))
.await
.map_err(|e| {
tonic::Status::new(tonic::Code::Internal, serde_json::to_string(&e).unwrap())
})?;
match res.data {
DcacheResponse::AddVisitorResult(res) => {
Ok(Response::new(dcache::OptionAddVisitorResult {
result: res.map(|f| f.into()),
}))
}
_ => unimplemented!(),
}
}
async fn rename_captcha(
&self,
request: tonic::Request<dcache::RenameCaptchaRequest>,
) -> std::result::Result<tonic::Response<dcache::RaftReply>, tonic::Status> {
let req = request.into_inner();
let res = self
.app
.raft
.client_write(DcacheRequest::RenameCaptcha(req.into()))
.await;
Ok(Response::new(res.into()))
}
async fn remove_captcha(
&self,
request: tonic::Request<dcache::CaptchaId>,
) -> std::result::Result<tonic::Response<dcache::RaftReply>, tonic::Status> {
let req = request.into_inner();
let res = self
.app
.raft
.client_write(DcacheRequest::RemoveCaptcha(MasterMessages::RemoveCaptcha(
req.id,
)))
.await;
Ok(Response::new(res.into()))
}
async fn cache_pow(
&self,
request: tonic::Request<dcache::CachePowRequest>,
) -> std::result::Result<tonic::Response<dcache::RaftReply>, tonic::Status> {
let req = request.into_inner();
let res = self
.app
.raft
.client_write(DcacheRequest::CachePoW(req.into()))
.await;
Ok(Response::new(res.into()))
}
async fn retrieve_pow(
&self,
request: tonic::Request<dcache::RetrievePowRequest>,
) -> std::result::Result<tonic::Response<dcache::OptionalRetrievePoWResponse>, tonic::Status>
{
let req = request.into_inner();
let sm = self.app.store.state_machine.read().await;
let res = sm.results.retrieve_pow_config(req.into());
Ok(Response::new(dcache::OptionalRetrievePoWResponse {
result: res.map(|x| x.into()),
}))
}
async fn delete_pow(
&self,
request: tonic::Request<dcache::DeletePowRequest>,
) -> std::result::Result<tonic::Response<dcache::RaftReply>, tonic::Status> {
let req = request.into_inner();
let res = self
.app
.raft
.client_write(DcacheRequest::DeletePoW(CacheMessages::DeletePoW(
req.string,
)))
.await;
Ok(Response::new(res.into()))
}
async fn cache_result(
&self,
request: tonic::Request<dcache::CacheResultRequest>,
) -> std::result::Result<tonic::Response<dcache::RaftReply>, tonic::Status> {
let req = request.into_inner();
let res = self
.app
.raft
.client_write(DcacheRequest::CacheResult(req.into()))
.await;
Ok(Response::new(res.into()))
}
async fn verify_captcha_result(
&self,
request: tonic::Request<dcache::RetrievePowRequest>,
) -> std::result::Result<tonic::Response<dcache::CaptchaResultVerified>, tonic::Status> {
let req = request.into_inner();
let sm = self.app.store.state_machine.read().await;
let verified = sm.results.verify_captcha_result(req.into());
Ok(Response::new(dcache::CaptchaResultVerified { verified }))
}
async fn delete_captcha_result(
&self,
request: tonic::Request<dcache::DeleteCaptchaResultRequest>,
) -> std::result::Result<tonic::Response<dcache::RaftReply>, tonic::Status> {
let req = request.into_inner();
let res = self
.app
.raft
.client_write(DcacheRequest::DeleteCaptchaResult(
CacheMessages::DeleteCaptchaResult { token: req.token },
))
.await;
Ok(Response::new(res.into()))
}
async fn captcha_exists(
&self,
request: tonic::Request<dcache::CaptchaId>,
) -> std::result::Result<tonic::Response<dcache::CaptchaExistsResponse>, tonic::Status> {
let req = request.into_inner();
let sm = self.app.store.state_machine.read().await;
let exists = sm.counter.get_captcha(&req.id).is_some();
Ok(Response::new(dcache::CaptchaExistsResponse { exists }))
}
async fn get_visitor_count(
&self,
request: tonic::Request<dcache::CaptchaId>,
) -> std::result::Result<tonic::Response<dcache::OptionGetVisitorCountResponse>, tonic::Status>
{
let req = request.into_inner();
let sm = self.app.store.state_machine.read().await;
if let Some(captcha) = sm.counter.get_captcha(&req.id) {
let res = captcha.get_visitors();
Ok(Response::new(dcache::OptionGetVisitorCountResponse {
result: Some(dcache::GetVisitorCountResponse { visitors: res }),
}))
} else {
Ok(Response::new(dcache::OptionGetVisitorCountResponse {
result: None,
}))
}
}
// type PipelineDcacheOpsStream =
// Pin<Box<dyn Stream<Item = Result<OuterPipelineRes, tonic::Status>> + Send + 'static>>;
// async fn pipeline_dcache_ops(
// &self,
// request: tonic::Request<tonic::Streaming<dcache::DcacheRequest>>,
// ) -> std::result::Result<tonic::Response<Self::PipelineDcacheOpsStream>, tonic::Status> {
async fn pipeline_dcache_ops(
&self,
request: tonic::Request<dcache::DcacheBatchRequest>,
) -> Result<Response<dcache::DcacheBatchResponse>, tonic::Status> {
let mut reqs = request.into_inner();
let mut responses = Vec::with_capacity(reqs.requests.len());
for req in reqs.requests.drain(0..) {
let res = match req.dcache_request.unwrap() {
PipelineReq::AddCaptcha(add_captcha_req) => {
let res = self
.app
.raft
.client_write(DcacheRequest::AddCaptcha(add_captcha_req.into()))
.await;
OuterPipelineRes {
dcache_response: Some(InnerPipelineRes::Other(res.into())),
}
}
PipelineReq::AddVisitor(add_visitor_req) => {
let res = self
.app
.raft
.client_write(DcacheRequest::AddVisitor(MasterMessages::AddVisitor(
add_visitor_req.id,
)))
.await;
match res {
Err(_) => OuterPipelineRes {
dcache_response: None,
},
Ok(res) => match res.data {
DcacheResponse::AddVisitorResult(res) => {
let res = dcache::OptionAddVisitorResult {
result: res.map(|f| f.into()),
};
OuterPipelineRes {
dcache_response: Some(
InnerPipelineRes::OptionAddVisitorResult(res),
),
}
}
_ => unimplemented!(),
},
}
}
PipelineReq::RenameCaptcha(rename_captcha_req) => {
let res = self
.app
.raft
.client_write(DcacheRequest::RenameCaptcha(rename_captcha_req.into()))
.await;
OuterPipelineRes {
dcache_response: Some(InnerPipelineRes::Other(res.into())),
}
}
PipelineReq::RemoveCaptcha(remove_captcha_req) => {
let res = self
.app
.raft
.client_write(DcacheRequest::RemoveCaptcha(MasterMessages::RemoveCaptcha(
remove_captcha_req.id,
)))
.await;
OuterPipelineRes {
dcache_response: Some(InnerPipelineRes::Other(res.into())),
}
}
PipelineReq::CachePow(cache_pow_req) => {
let res = self
.app
.raft
.client_write(DcacheRequest::CachePoW(cache_pow_req.into()))
.await;
OuterPipelineRes {
dcache_response: Some(InnerPipelineRes::Other(res.into())),
}
}
PipelineReq::CacheResult(cache_result_req) => {
let res = self
.app
.raft
.client_write(DcacheRequest::CacheResult(cache_result_req.into()))
.await;
OuterPipelineRes {
dcache_response: Some(InnerPipelineRes::Other(res.into())),
}
}
PipelineReq::CaptchaExists(captcha_exists_req) => {
let sm = self.app.store.state_machine.read().await;
let exists = sm.counter.get_captcha(&captcha_exists_req.id).is_some();
let res = dcache::CaptchaExistsResponse { exists };
drop(sm);
OuterPipelineRes {
dcache_response: Some(InnerPipelineRes::CaptchaExists(res)),
}
}
PipelineReq::GetVisitorCount(get_visitor_count_req) => {
let sm = self.app.store.state_machine.read().await;
if let Some(captcha) = sm.counter.get_captcha(&get_visitor_count_req.id) {
let res = captcha.get_visitors();
OuterPipelineRes {
dcache_response: Some(InnerPipelineRes::GetVisitorCount(
dcache::OptionGetVisitorCountResponse {
result: Some(dcache::GetVisitorCountResponse { visitors: res }),
},
)),
}
} else {
OuterPipelineRes {
dcache_response: Some(InnerPipelineRes::GetVisitorCount(
dcache::OptionGetVisitorCountResponse { result: None },
)),
}
}
}
};
responses.push(res);
}
Ok(Response::new(dcache::DcacheBatchResponse { responses }))
}
async fn write(
&self,
request: tonic::Request<RaftRequest>,
) -> std::result::Result<tonic::Response<RaftReply>, tonic::Status> {
let req = request.into_inner();
let req = serde_json::from_str(&req.data).unwrap();
let res = self.app.raft.client_write(req).await;
Ok(Response::new(res.into()))
}
/// Forward a request to another node
async fn forward(
&self,
_request: tonic::Request<RaftRequest>,
) -> std::result::Result<tonic::Response<RaftReply>, tonic::Status> {
unimplemented!();
}
async fn append_entries(
&self,
request: tonic::Request<RaftRequest>,
) -> std::result::Result<tonic::Response<RaftReply>, tonic::Status> {
let req = request.into_inner();
let req = serde_json::from_str(&req.data).unwrap();
let res = self.app.raft.append_entries(req).await;
Ok(Response::new(res.into()))
}
async fn install_snapshot(
&self,
request: tonic::Request<RaftRequest>,
) -> std::result::Result<tonic::Response<RaftReply>, tonic::Status> {
let req = request.into_inner();
let req = serde_json::from_str(&req.data).unwrap();
let res = self.app.raft.install_snapshot(req).await;
Ok(Response::new(res.into()))
}
async fn vote(
&self,
request: tonic::Request<RaftRequest>,
) -> std::result::Result<tonic::Response<RaftReply>, tonic::Status> {
let req = request.into_inner();
let req = serde_json::from_str(&req.data).unwrap();
let res = self.app.raft.vote(req).await;
Ok(Response::new(res.into()))
}
}
impl<T, E> From<RaftReply> for Result<T, E>
where
T: DeserializeOwned,
E: DeserializeOwned,
{
fn from(msg: RaftReply) -> Self {
if !msg.data.is_empty() {
let resp: T = serde_json::from_str(&msg.data).expect("fail to deserialize");
Ok(resp)
} else {
let err: E = serde_json::from_str(&msg.error).expect("fail to deserialize");
Err(err)
}
}
}
impl<T, E> From<Result<T, E>> for RaftReply
where
T: Serialize,
E: Serialize,
{
fn from(r: Result<T, E>) -> Self {
match r {
Ok(x) => {
let data = serde_json::to_string(&x).expect("fail to serialize");
RaftReply {
data,
error: Default::default(),
}
}
Err(e) => {
let error = serde_json::to_string(&e).expect("fail to serialize");
RaftReply {
data: Default::default(),
error,
}
}
}
}
}
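// Sketch of the RaftReply convention implemented above: an Ok value is
// JSON-encoded into `data` with an empty `error`, an Err into `error` with an
// empty `data`; decoding picks the branch based on which field is non-empty.
// The u32/String payload types are illustrative.
#[cfg(test)]
mod raft_reply_roundtrip {
    use super::*;

    #[test]
    fn ok_and_err_roundtrip() {
        let reply: RaftReply = (Ok::<u32, String>(7)).into();
        assert_eq!(reply.data, "7");
        assert!(reply.error.is_empty());
        let back: Result<u32, String> = reply.into();
        assert_eq!(back, Ok(7));

        let reply: RaftReply = (Err::<u32, String>("boom".into())).into();
        assert!(reply.data.is_empty());
        let back: Result<u32, String> = reply.into();
        assert_eq!(back, Err("boom".to_string()));
    }
}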
impl From<dcache::AddCaptchaRequest> for MasterMessages::AddSite {
fn from(value: dcache::AddCaptchaRequest) -> Self {
let req_mcaptcha = value.mcaptcha.unwrap();
let mut defense = req_mcaptcha.defense.unwrap();
let mut new_defense = defense::DefenseBuilder::default();
for level in defense.levels.drain(0..) {
new_defense
.add_level(
defense::LevelBuilder::default()
.difficulty_factor(level.difficulty_factor)
.unwrap()
.visitor_threshold(level.visitor_threshold)
.build()
.unwrap(),
)
.unwrap();
}
let defense = new_defense.build().unwrap();
let mcaptcha = mcaptcha::MCaptchaBuilder::default()
.defense(defense)
.duration(req_mcaptcha.duration)
.build()
.unwrap();
Self {
id: value.id,
mcaptcha,
}
}
}
impl From<libmcaptcha::master::AddVisitorResult> for dcache::AddVisitorResult {
fn from(value: libmcaptcha::master::AddVisitorResult) -> Self {
Self {
duration: value.duration,
difficulty_factor: value.difficulty_factor,
}
}
}
impl From<dcache::RenameCaptchaRequest> for MasterMessages::Rename {
fn from(value: dcache::RenameCaptchaRequest) -> Self {
Self {
name: value.name,
rename_to: value.rename_to,
}
}
}
impl From<dcache::CachePowRequest> for CacheMessages::CachePoW {
fn from(value: dcache::CachePowRequest) -> Self {
Self {
string: value.string,
difficulty_factor: value.difficulty_factor,
duration: value.duration,
key: value.key,
}
}
}
impl From<CacheMessages::CachePoW> for dcache::CachePowRequest {
fn from(value: CacheMessages::CachePoW) -> Self {
Self {
string: value.string,
difficulty_factor: value.difficulty_factor,
duration: value.duration,
key: value.key,
}
}
}
impl From<CacheMessages::CachedPoWConfig> for dcache::RetrievePowResponse {
fn from(value: CacheMessages::CachedPoWConfig) -> Self {
Self {
difficulty_factor: value.difficulty_factor,
duration: value.duration,
key: value.key,
}
}
}
impl From<dcache::CacheResultRequest> for CacheMessages::CacheResult {
fn from(value: dcache::CacheResultRequest) -> Self {
Self {
token: value.token,
key: value.key,
duration: value.duration,
}
}
}
impl From<dcache::RetrievePowRequest> for CacheMessages::VerifyCaptchaResult {
fn from(value: dcache::RetrievePowRequest) -> Self {
Self {
token: value.token,
key: value.key,
}
}
}

View file

@ -23,7 +23,6 @@ use std::ops::RangeBounds;
use std::sync::Arc;
use std::sync::Mutex;
use libmcaptcha::cache::messages::CachedPoWConfig;
use libmcaptcha::AddVisitorResult;
use libmcaptcha::MCaptcha;
use openraft::async_trait::async_trait;
@ -48,15 +47,11 @@ use openraft::Vote;
use serde::Deserialize;
use serde::Serialize;
use tokio::sync::RwLock;
use url::quirks::set_pathname;
use crate::DcacheNodeId;
use crate::DcacheTypeConfig;
use actix::prelude::*;
use libmcaptcha::cache::messages::{
CachePoW, CacheResult, DeleteCaptchaResult, DeletePoW, RetrivePoW, VerifyCaptchaResult,
};
use libmcaptcha::cache::messages::{CachePoW, CacheResult, DeleteCaptchaResult, DeletePoW};
use libmcaptcha::master::messages::{
AddSite as AddCaptcha, AddVisitor, GetInternalData, RemoveCaptcha, Rename as RenameCaptcha,
SetInternalData,
@ -99,7 +94,9 @@ pub struct DcacheStateMachine {
pub last_membership: StoredMembership<DcacheNodeId, BasicNode>,
/// Application data.
pub data: Arc<System<HashCache, EmbeddedMaster>>,
// pub data: Arc<System<HashCache, EmbeddedMaster>>,
pub counter: crate::mcaptcha::mcaptcha::Manager,
pub results: crate::mcaptcha::cache::HashCache,
}
#[derive(Serialize, Deserialize, Clone)]
@ -108,42 +105,34 @@ struct PersistableStateMachine {
last_membership: StoredMembership<DcacheNodeId, BasicNode>,
/// Application data.
data: HashMap<String, MCaptcha>,
counter: crate::mcaptcha::mcaptcha::Manager,
results: crate::mcaptcha::cache::HashCache,
}
impl PersistableStateMachine {
async fn from_statemachine(m: &DcacheStateMachine) -> Self {
let internal_data = m
.data
.master
.send(GetInternalData)
.await
.unwrap()
.await
.unwrap()
.unwrap();
let counter = m.counter.clone();
let results = m.results.clone();
Self {
last_applied_log: m.last_applied_log.clone(),
last_applied_log: m.last_applied_log,
last_membership: m.last_membership.clone(),
data: internal_data,
counter,
results,
}
}
async fn to_statemachine(
self,
data: Arc<System<HashCache, EmbeddedMaster>>,
counter: crate::mcaptcha::mcaptcha::Manager,
results: crate::mcaptcha::cache::HashCache,
) -> DcacheStateMachine {
data.master
.send(SetInternalData {
mcaptcha: self.data,
})
.await
.unwrap();
self.counter.clean_all_after_cold_start(counter).await;
self.results.clean_all_after_cold_start(results).await;
DcacheStateMachine {
last_applied_log: self.last_applied_log,
last_membership: self.last_membership,
data,
results: self.results,
counter: self.counter,
}
}
}
@ -170,7 +159,8 @@ impl DcacheStore {
let state_machine = RwLock::new(DcacheStateMachine {
last_applied_log: Default::default(),
last_membership: Default::default(),
data: system::init_system(salt),
counter: crate::mcaptcha::mcaptcha::Manager::new(30),
results: crate::mcaptcha::cache::HashCache::default(),
});
Self {
@ -396,83 +386,42 @@ impl RaftStorage<DcacheTypeConfig> for Arc<DcacheStore> {
EntryPayload::Blank => res.push(DcacheResponse::Empty),
EntryPayload::Normal(ref req) => match req {
DcacheRequest::AddVisitor(msg) => {
let r = sm
.data
.master
.send(msg.clone())
.await
.unwrap()
.await
.unwrap()
.unwrap();
let r = sm.counter.add_visitor(msg);
res.push(DcacheResponse::AddVisitorResult(r));
}
DcacheRequest::AddCaptcha(msg) => {
sm.data
.master
.send(msg.clone())
.await
.unwrap()
.await
.unwrap()
.unwrap();
sm.counter
.add_captcha(Arc::new((&msg.mcaptcha).into()), msg.id.clone());
res.push(DcacheResponse::Empty);
}
DcacheRequest::RenameCaptcha(msg) => {
sm.data
.master
.send(msg.clone())
.await
.unwrap()
.await
.unwrap()
.unwrap();
sm.counter.rename(&msg.name, msg.rename_to.clone());
res.push(DcacheResponse::Empty);
}
DcacheRequest::RemoveCaptcha(msg) => {
sm.data
.master
.send(msg.clone())
.await
.unwrap()
.await
.unwrap()
.unwrap();
sm.counter.rm_captcha(&msg.0);
res.push(DcacheResponse::Empty);
}
// cache
DcacheRequest::CachePoW(msg) => {
sm.data
.cache
.send(msg.clone())
.await
.unwrap()
.await
.unwrap()
.unwrap();
sm.results.cache_pow(msg.clone());
res.push(DcacheResponse::Empty);
}
DcacheRequest::DeletePoW(msg) => {
sm.data.cache.send(msg.clone()).await.unwrap().unwrap();
sm.results.remove_pow_config(&msg.0);
// sm.data.cache.send(msg.clone()).await.unwrap().unwrap();
res.push(DcacheResponse::Empty);
}
DcacheRequest::CacheResult(msg) => {
sm.data
.cache
.send(msg.clone())
.await
.unwrap()
.await
.unwrap()
.unwrap();
sm.results.cache_result(msg.clone());
res.push(DcacheResponse::Empty);
}
DcacheRequest::DeleteCaptchaResult(msg) => {
sm.data.cache.send(msg.clone()).await.unwrap().unwrap();
sm.results.remove_cache_result(&msg.token);
res.push(DcacheResponse::Empty);
}
},
@@ -519,7 +468,7 @@ impl RaftStorage<DcacheTypeConfig> for Arc<DcacheStore> {
})?;
let mut state_machine = self.state_machine.write().await;
let updated_state_machine = updated_state_machine
.to_statemachine(state_machine.data.clone())
.to_statemachine(state_machine.counter.clone(), state_machine.results.clone())
.await;
*state_machine = updated_state_machine;
}
@@ -554,3 +503,19 @@ impl RaftStorage<DcacheTypeConfig> for Arc<DcacheStore> {
self.clone()
}
}
#[cfg(test)]
mod tests {
use super::*;
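// Note: this runs openraft's built-in storage compliance suite
// (openraft::testing::Suite::test_all) against a DcacheStore built by the
// helper below; the hard-coded string is just a throwaway test value.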
async fn provision_dcache_store() -> Arc<DcacheStore> {
Arc::new(DcacheStore::new(
"adsfasdfasdfadsfadfadfadfadsfasdfasdfasdfasdf".into(),
))
}
#[test]
fn test_dcache_store() {
openraft::testing::Suite::test_all(provision_dcache_store).unwrap();
}
}

test.py (83 changes)

@@ -17,6 +17,14 @@
from pprint import pprint
import requests
import grpc
import json
from dcache_py import dcache_pb2 as dcache
from dcache_py.dcache_pb2 import RaftRequest
from dcache_py.dcache_pb2_grpc import DcacheServiceStub
# import dcache_py.dcache_resources
def init(host: str):
@@ -82,12 +90,13 @@ host = "localhost:9001"
peers = [(2, "localhost:9002"), (3, "localhost:9003"), (4, "localhost:9004")]
captcha_id = "test_1"
def initialize_cluster():
init(host)
for peer_id, peer in peers:
add_host(host=host, id=peer_id, peer=peer)
switch_to_cluster(host, nodes=[1, 2,3,4])
switch_to_cluster(host, nodes=[1, 2, 3, 4])
add_captcha(host, captcha_id)
add_vote(host, captcha_id)
@@ -95,7 +104,75 @@ def initialize_cluster():
add_vote(host, captcha_id)
def grpc_add_vote(stub: DcacheServiceStub, captcha_id: str):
msg = dcache.CaptchaID(id=captcha_id)
# msg = RaftRequest(data=json.dumps({"AddVisitor": captcha_id}))
# resp = stub.Write(msg)
resp = stub.AddVisitor(msg)
pprint(resp)
def grpc_add_captcha(stub: DcacheServiceStub, captcha_id: str):
msg = dcache.AddCaptchaRequest(
id=captcha_id,
mcaptcha=dcache.MCaptcha(
duration=30,
defense=dcache.Defense(
levels=[
dcache.Level(visitor_threshold=50, difficulty_factor=500),
dcache.Level(visitor_threshold=5000, difficulty_factor=50000),
]
),
),
)
# params = {
# "AddCaptcha": {
# "id": captcha_id,
# "mcaptcha": {
# "defense": {
# "levels": [
# {"visitor_threshold": 50, "difficulty_factor": 500},
# {"visitor_threshold": 5000, "difficulty_factor": 50000},
# ],
# "current_visitor_threshold": 0,
# },
# "duration": 30,
# },
# }
# }
# msg = RaftRequest(data = json.dumps(params))
resp = stub.AddCaptcha(msg)
pprint(f"Captcha added {captcha_id}: {resp}")
msgs = []
for _ in range(0,1000):
msgs.append(
dcache.DcacheRequest(addVisitor=dcache.CaptchaID(id=captcha_id)),
)
msgs = dcache.DcacheBatchRequest(requests=msgs)
def grpc_pipeline_add_vote(stub):
responses = stub.PipelineDcacheOps(msgs)
for r in responses.responses:
print(f"received respo: {r}")
def grpc_run():
with grpc.insecure_channel(host) as channel:
stub = DcacheServiceStub(channel)
grpc_add_captcha(stub, captcha_id)
grpc_pipeline_add_vote(stub)
#grpc_add_vote(stub, captcha_id)
if __name__ == "__main__":
add_vote("localhost:9002", captcha_id)
grpc_run()
# add_vote("localhost:9002", captcha_id)

tests/.gitignore (new file, 138 lines)

@@ -0,0 +1,138 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/

tests/bucket.py (new file, 93 lines)

@@ -0,0 +1,93 @@
#!/usr/bin/env python3
#
# Copyright (C) 2021 Aravinth Manivannan <realaravinth@batsense.net>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
from asyncio import sleep
import sys
import json
from mcaptcha import register
from dcache import grpc_add_vote, grpc_get_visitor_count
def incr(key):
return grpc_add_vote(key)
def get_count(key):
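# Missing or failed lookups are treated as a visitor count of zero.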
try:
count = grpc_get_visitor_count(key)
return int(count.visitors)
except Exception:
return 0
def assert_count(expect, key):
count = get_count(key)
assert count == expect
async def incr_one_works():
try:
key = "incr_one"
register(key)
initial_count = get_count(key)
# increment
incr(key)
assert_count(initial_count + 1, key)
# wait till expiry
await sleep(5 + 2)
assert_count(initial_count, key)
print("[*] Incr one works")
except Exception as e:
raise e
async def race_works():
key = "race_works"
try:
register(key)
initial_count = get_count(key)
race_num = 200
for _ in range(race_num):
incr(key)
assert_count(initial_count + race_num, key)
# wait till expiry
await sleep(5 + 2)
assert_count(initial_count, key)
print("[*] Race works")
except Exception as e:
raise e
async def difficulty_works():
key = "difficulty_works"
try:
register(key)
data = incr(key)
assert data.difficulty_factor == 50
for _ in range(501):
incr(key)
data = incr(key)
assert data.difficulty_factor == 500
await sleep(5 + 2)
data = incr(key)
assert data.difficulty_factor == 50
print("[*] Difficulty factor works")
except Exception as e:
raise e
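
The async checks above are plain coroutines with no runner in this file; a minimal driver sketch (not part of this diff, assuming it is executed from the tests/ directory so that bucket.py is importable and a node is listening on the default host) could look like:

import asyncio
from bucket import incr_one_works, race_works, difficulty_works

async def main():
    # Run the checks sequentially; each registers its own captcha key.
    await incr_one_works()
    await race_works()
    await difficulty_works()

if __name__ == "__main__":
    asyncio.run(main())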

tests/dcache.py (new file, 126 lines)

@@ -0,0 +1,126 @@
import requests
import grpc
import json
from dcache_py import dcache_pb2 as dcache
from dcache_py.dcache_pb2 import RaftRequest
from dcache_py.dcache_pb2_grpc import DcacheServiceStub
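# Thin client-side helpers around the DcacheService stub: each function opens
# a short-lived insecure channel to `host`, issues a single RPC and returns
# the response (or the relevant field of it).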
host = "localhost:9001"
def grpc_add_vote(captcha_id: str):
with grpc.insecure_channel(host) as channel:
stub = DcacheServiceStub(channel)
msg = dcache.CaptchaID(id=captcha_id)
resp = stub.AddVisitor(msg)
return resp.result
def grpc_add_captcha(captcha_id: str):
with grpc.insecure_channel(host) as channel:
stub = DcacheServiceStub(channel)
msg = dcache.AddCaptchaRequest(
id=captcha_id,
mcaptcha=dcache.MCaptcha(
duration=5,
defense=dcache.Defense(
levels=[
dcache.Level(visitor_threshold=50, difficulty_factor=50),
dcache.Level(visitor_threshold=500, difficulty_factor=500),
]
),
),
)
resp = stub.AddCaptcha(msg)
return resp
def grpc_captcha_exists(captcha_id: str):
with grpc.insecure_channel(host) as channel:
stub = DcacheServiceStub(channel)
msg = dcache.CaptchaID(id=captcha_id)
resp = stub.CaptchaExists(msg)
return resp.exists
def grpc_rename_captcha(captcha_id: str, new_id: str):
with grpc.insecure_channel(host) as channel:
stub = DcacheServiceStub(channel)
msg = dcache.RenameCaptchaRequest(name=captcha_id, rename_to=new_id)
resp = stub.RenameCaptcha(msg)
def grpc_delete_captcha(captcha_id: str):
with grpc.insecure_channel(host) as channel:
stub = DcacheServiceStub(channel)
msg = dcache.CaptchaID(id=captcha_id)
stub.RemoveCaptcha(msg)
def grpc_get_visitor_count(captcha_id: str):
with grpc.insecure_channel(host) as channel:
stub = DcacheServiceStub(channel)
msg = dcache.CaptchaID(id=captcha_id)
return stub.GetVisitorCount(msg).result
def grpc_add_challenge(token: str, key: str):
with grpc.insecure_channel(host) as channel:
stub = DcacheServiceStub(channel)
msg = dcache.CacheResultRequest(
token=token,
key=key,
duration=5,
)
stub.CacheResult(msg)
def grpc_get_challenge(token: str, key: str):
with grpc.insecure_channel(host) as channel:
stub = DcacheServiceStub(channel)
msg = dcache.RetrievePowRequest(
token=token,
key=key,
)
return stub.VerifyCaptchaResult(msg)
def grpc_delete_challenge(token: str):
with grpc.insecure_channel(host) as channel:
stub = DcacheServiceStub(channel)
msg = dcache.DeleteCaptchaResultRequest(
token=token,
)
stub.DeleteCaptchaResult(msg)
def grpc_add_pow(token: str, string: str):
with grpc.insecure_channel(host) as channel:
stub = DcacheServiceStub(channel)
msg = dcache.CachePowRequest(
key=token, string=string, duration=5, difficulty_factor=500
)
return stub.CachePow(msg)
def grpc_get_pow(token: str, string: str):
with grpc.insecure_channel(host) as channel:
stub = DcacheServiceStub(channel)
msg = dcache.RetrievePowRequest(token=string, key=token)
resp = stub.RetrievePow(msg)
return resp
def grpc_delete_pow(string: str):
with grpc.insecure_channel(host) as channel:
stub = DcacheServiceStub(channel)
msg = dcache.DeletePowRequest(
string=string,
)
stub.DeletePow(msg)

tests/mcaptcha.py (new file, 87 lines)

@@ -0,0 +1,87 @@
#!/usr/bin/env python3
#
# Copyright (C) 2021 Aravinth Manivannan <realaravinth@batsense.net>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
import json
from dcache import (
grpc_add_captcha,
grpc_add_vote,
grpc_captcha_exists,
grpc_rename_captcha,
)
from dcache import grpc_delete_captcha
def delete_captcha(key):
grpc_delete_captcha(key)
def add_captcha(key):
grpc_add_captcha(key)
def rename_captcha(key, new_key):
grpc_rename_captcha(key, new_key)
def captcha_exists(key):
return grpc_captcha_exists(captcha_id=key)
def register(key):
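# Idempotent setup: drop any existing captcha under this key, then add it afresh.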
if captcha_exists(key):
delete_captcha(key)
add_captcha(key)
async def captcha_exists_works():
key = "captcha_delete_works"
if captcha_exists(key):
delete_captcha(key)
assert captcha_exists(key) is False
register(key)
assert captcha_exists(key) is True
print("[*] Captcha delete works")
async def register_captcha_works():
key = "register_captcha_works"
register(key)
assert captcha_exists(key) is True
print("[*] Add captcha works")
async def delete_captcha_works():
key = "delete_captcha_works"
register(key)
exists = captcha_exists(key)
assert exists is True
delete_captcha(key)
assert captcha_exists(key) is False
print("[*] Delete captcha works")
async def rename_captcha_works():
key = "rename_captcha_works"
new_key = "new_key_rename_captcha_works"
register(key)
exists = captcha_exists(key)
assert exists is True
rename_captcha(key, new_key)
print(captcha_exists(key))
assert captcha_exists(key) is False
assert captcha_exists(new_key) is True
print("[*] Rename captcha works")

tests/pow.py (new file, 119 lines)

@@ -0,0 +1,119 @@
#!/usr/bin/env python3
#
# Copyright (C) 2023 Aravinth Manivannan <realaravinth@batsense.net>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
from asyncio import sleep
import json
from dcache import grpc_add_pow, grpc_get_pow, grpc_delete_pow
# 1. Check duplicate pow
# 2. Create pow
# 3. Read non-existent pow
# 4. Read pow
# 5. Read expired pow
def add_pow(captcha, pow):
"""Add pow to"""
try:
res = grpc_add_pow(captcha, pow)
return res
except Exception as e:
return e
def get_pow_from(captcha, pow):
"""Add pow to"""
try:
res = grpc_get_pow(captcha, pow)
if res.HasField("result"):
return res.result
else:
return None
except Exception as e:
return e
def delete_pow(captcha, pow):
"""Add pow to"""
try:
grpc_delete_pow(pow)
except Exception as e:
return e
async def add_pow_works():
"""Test: Add pow"""
try:
key = "add_pow"
pow_name = "add_pow_pow"
add_pow(key, pow_name)
stored_pow = get_pow_from(key, pow_name)
assert stored_pow.difficulty_factor == 500
assert stored_pow.duration == 5
print("[*] Add pow works")
except Exception as e:
raise e
async def pow_ttl_works():
"""Test: pow TTL"""
try:
key = "ttl_pow"
pow_name = "ttl_pow_pow"
add_pow(key, pow_name)
await sleep(5 + 2)
error = get_pow_from(key, pow_name)
assert error is None
print("[*] pow TTL works")
except Exception as e:
raise e
async def pow_doesnt_exist():
"""Test: Non-existent pow"""
try:
pow_name = "nonexistent_pow"
key = "nonexistent_pow_key"
error = get_pow_from(key, pow_name)
assert error is None
print("[*] pow Doesn't Exist works")
except Exception as e:
raise e
async def delete_pow_works():
"""Test: Delete pows"""
try:
pow_name = "delete_pow"
key = "delete_pow_key"
# pow = get_pow(pow_name)
add_pow(key, pow_name)
delete_pow(key, pow_name)
error = get_pow_from(key, pow_name)
assert error is None
print("[*] Delete pow works")
except Exception as e:
raise e

Some files were not shown because too many files have changed in this diff.