Since the issue indexer has been refactored, the issue overview page
is built by the `buildIssueOverview` function on top of the underlying
`indexer.Search` function and `GetIssueStats` instead of
`GetUserIssueStats`, so that function is no longer used.
I moved the relevant tests to `indexer_test.go`, and since the search
option changed from `IssueOptions` to `SearchOptions`, most of the tests
are useless now.
We need more tests for the db indexer, because those tests are closely
tied to the issue overview page and this page currently has several
bugs.
Any advice about those test cases is appreciated.
---------
Co-authored-by: CaiCandong <50507092+CaiCandong@users.noreply.github.com>
Fix #26723
Add `ChangeDefaultBranch` to the `notifier` interface and implement it
in `indexerNotifier`. So when changing the default branch,
`indexerNotifier` sends a message to the `indexer queue` to update the
index.
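A rough, self-contained sketch of the idea (the type names and the queue here are stand-ins, not Gitea's real `notifier` interface or indexer queue):

```go
package notify

import (
	"context"
	"log"
)

// Repository is an illustrative stand-in for Gitea's repository model.
type Repository struct{ ID int64 }

// indexerNotifier sketches the notifier that feeds the indexer queue.
type indexerNotifier struct {
	queue chan int64 // stand-in for the real indexer queue
}

// ChangeDefaultBranch is the new hook: when the default branch changes,
// enqueue the repository so the indexer updates the index.
func (n *indexerNotifier) ChangeDefaultBranch(ctx context.Context, repo *Repository) {
	select {
	case n.queue <- repo.ID:
	case <-ctx.Done():
		log.Printf("could not queue repo %d for re-indexing: %v", repo.ID, ctx.Err())
	}
}
```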
---------
Co-authored-by: techknowlogick <matti@mdranta.net>
Unfortunately, when a system setting hasn't been stored in the database,
it cannot be cached.
Meanwhile, this PR also uses the context cache for push email avatar display,
which avoids reading the user table by email address again and again.
According to my local test, this should reduce the dashboard elapsed time
from 150ms to 80ms.
Currently, artifacts have no expiration or automatic cleanup
mechanism, so this feature needs to be added. It contains the following
key points:
- [x] add global artifact retention days option in config file. Default
value is 90 days.
- [x] add cron task to clean up expired artifacts. It should run once a
day.
- [x] support custom retention period from `retention-days: 5` in
`upload-artifact@v3`.
- [x] artifacts link in actions view should be non-clickable text when
expired.
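A minimal sketch of the expiry rule the cron task could apply (names are illustrative, not the actual Gitea model or config keys):

```go
package artifacts

import "time"

// isArtifactExpired reports whether an artifact has outlived its retention
// period. retentionDays would come from the upload's `retention-days` value,
// falling back to the global default when unset.
func isArtifactExpired(createdAt time.Time, retentionDays int, now time.Time) bool {
	if retentionDays <= 0 {
		retentionDays = 90 // assumed global default from the config file
	}
	return now.After(createdAt.AddDate(0, 0, retentionDays))
}
```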
> ### Description
> If a new branch is pushed, and the repository has a rule that would
require signed commits for the new branch, the commit is rejected with a
500 error regardless of whether it's signed.
>
> When pushing a new branch, the "old" commit is the empty ID
(0000000000000000000000000000000000000000). verifyCommits has no
provision for this and passes an invalid commit range to git rev-list.
Prior to 1.19 this wasn't an issue because only pre-existing individual
branches could be protected.
>
> I was able to reproduce with
[try.gitea.io/CraigTest/test](https://try.gitea.io/CraigTest/test),
which is set up with a blanket rule to require commits on all branches.
Fix #25565
Many thanks to @Craig-Holmquist-NTI for reporting the bug and suggesting
a valid solution!
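A minimal sketch of the idea behind the fix (not the exact Gitea code): when the old commit is the empty ID, build a rev-list argument set that verifies the new commits on their own instead of passing an invalid `old..new` range.

```go
package verify

// emptyCommitID is the all-zero SHA git sends for a newly created branch.
const emptyCommitID = "0000000000000000000000000000000000000000"

// revListArgs returns the arguments to pass to `git rev-list` when collecting
// the commits that need verification for a push.
func revListArgs(oldCommitID, newCommitID string) []string {
	if oldCommitID == emptyCommitID {
		// New branch: there is no old commit, so list the commits reachable
		// from the new tip that are not already on any existing ref.
		return []string{newCommitID, "--not", "--all"}
	}
	return []string{oldCommitID + ".." + newCommitID}
}
```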
---------
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
spec:
https://docs.github.com/en/rest/actions/secrets?apiVersion=2022-11-28#create-or-update-a-repository-secret
- Add a new route for creating or updating a secret value in a
repository
- Create a new file `routers/api/v1/repo/action.go` with the
implementation of the `CreateOrUpdateSecret` function
- Update the Swagger documentation for the `updateRepoSecret` operation
in the `v1_json.tmpl` template file
---------
Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-authored-by: Giteabot <teabot@gitea.io>
In PR #26786, the Go version for golangci-lint is bumped to 1.21. This
causes the following error:
```
models/migrations/v1_16/v210.go:132:23: SA1019: elliptic.Marshal has been deprecated since Go 1.21: for ECDH, use the crypto/ecdh package. This function returns an encoding equivalent to that of PublicKey.Bytes in crypto/ecdh. (staticcheck)
PublicKey: elliptic.Marshal(elliptic.P256(), parsed.PubKey.X, parsed.PubKey.Y),
```
The change now uses [func (*PublicKey)
ECDH](https://pkg.go.dev/crypto/ecdsa#PublicKey.ECDH), which was added in
Go 1.20.
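For reference, a self-contained illustration of the replacement (not the migration code itself); both encodings produce the same uncompressed-point bytes:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"fmt"
)

func main() {
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// Deprecated since Go 1.21:
	//   encoded := elliptic.Marshal(elliptic.P256(), priv.PublicKey.X, priv.PublicKey.Y)

	// Preferred: convert to a crypto/ecdh key; Bytes() returns the equivalent
	// uncompressed-point encoding.
	ecdhPub, err := priv.PublicKey.ECDH()
	if err != nil {
		panic(err)
	}
	fmt.Printf("public key is %d bytes\n", len(ecdhPub.Bytes()))
}
```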
- Resolves https://codeberg.org/forgejo/forgejo/issues/580
- Return an `upload_field` in any release API response, which points to
the API URL for uploading new assets.
- Add a unit test.
- Add integration testing to verify that the URL is returned correctly and
that the upload endpoint actually works
---------
Co-authored-by: Gusted <postmaster@gusted.xyz>
Replace #22751
1. only support the default branch in the repository setting.
2. autoload schedule data from the schedule table after starting the
service.
3. support specific syntax like `@yearly`, `@monthly`, `@weekly`,
`@daily`, `@hourly`
## How to use
See the [GitHub Actions
document](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#schedule)
for more detailed information.
```yaml
on:
  schedule:
    - cron: '30 5 * * 1,3'
    - cron: '30 5 * * 2,4'

jobs:
  test_schedule:
    runs-on: ubuntu-latest
    steps:
      - name: Not on Monday or Wednesday
        if: github.event.schedule != '30 5 * * 1,3'
        run: echo "This step will be skipped on Monday and Wednesday"
      - name: Every time
        run: echo "This step will always run"
```
Signed-off-by: Bo-Yi.Wu <appleboy.tw@gmail.com>
---------
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
This PR has multiple parts, and I didn't split them because
it's not easy to test them separately since they are all about the
dashboard page for issues.
1. Support counting issues via indexer to fix #26361
2. Fix repo selection so it also fixes #26653
3. Keep keywords in filter links.
The first two are regressions of #26012.
After:
https://github.com/go-gitea/gitea/assets/9418365/71dfea7e-d9e2-42b6-851a-cc081435c946
Thanks to @CaiCandong for helping with some tests.
- Add a new function `CountOrgSecrets` in the file
`models/secret/secret.go`
- Add a new file `modules/structs/secret.go`
- Add a new function `ListActionsSecrets` in the file
`routers/api/v1/api.go`
- Add a new file `routers/api/v1/org/action.go`
- Add a new function `listActionsSecrets` in the file
`routers/api/v1/org/action.go`
go-sdk: https://gitea.com/gitea/go-sdk/pulls/629
---------
Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <matti@mdranta.net>
Co-authored-by: Giteabot <teabot@gitea.io>
## Archived labels
This adds the structure to allow for archived labels.
Archived labels are, just like closed milestones or projects, a medium to hide information without deleting it.
It is especially useful if there are outdated labels that should no longer be used without deleting the label entirely.
## Changes
1. UI and API have been equipped with the support to mark a label as archived
2. The time when a label has been archived will be stored in the DB
## Outsourced for the future
There's no special handling for archived labels at the moment.
This will be done in the future.
## Screenshots
![image](https://github.com/go-gitea/gitea/assets/80308335/208f95cd-42e4-4ed7-9a1f-cd2050a645d4)
![image](https://github.com/go-gitea/gitea/assets/80308335/746428e0-40bb-45b3-b992-85602feb371d)
Part of https://github.com/go-gitea/gitea/issues/25237
---------
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Fixes #25564
Fixes #23191
- The API v2 search endpoint should return only the latest version matching
the query
- The API v3 search endpoint should return `take` packages, not package
versions
The xorm `Sync2` has already been deprecated in favor of `Sync`,
so let's do the same inside the Gitea codebase.
Command used to replace everything:
```sh
for i in $(ag Sync2 --files-with-matches); do vim $i -c ':%sno/Sync2/Sync/g' -c ':wq'; done
```
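Call sites only change in name; `Sync` takes the same variadic bean arguments as the deprecated `Sync2`. A small illustrative example (the bean type here is made up):

```go
package models

import "xorm.io/xorm"

// User is an illustrative bean; any mapped struct works the same way.
type User struct {
	ID   int64  `xorm:"pk autoincr"`
	Name string `xorm:"unique"`
}

// syncTables shows the only change needed at call sites.
func syncTables(x *xorm.Engine) error {
	// before: return x.Sync2(new(User))
	return x.Sync(new(User))
}
```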
This PR refactors a bunch of projects-related code, mostly the
templates.
The following things were done:
- rename boards to columns in frontend code
- use the new `ctx.Locale.Tr` method
- cleanup template, remove useless newlines, classes, comments
- merge org-/user and repo level project template together
- move "new column" button into project toolbar
- move issue card (shared by projects and pinned issues) to shared
template, remove useless duplicated styles
- add search function to projects (to make the layout more similar to
milestones list where it is inherited from 😆)
- maybe more changes I forgot I've done 😆
Closes #24893
After:
![Bildschirmfoto vom 2023-08-10
23-02-00](https://github.com/go-gitea/gitea/assets/47871822/cab61456-1d23-4373-8163-e567f1b3b5f9)
![Bildschirmfoto vom 2023-08-10
23-02-26](https://github.com/go-gitea/gitea/assets/47871822/94b55d60-5572-48eb-8111-538a52d8bcc6)
![Bildschirmfoto vom 2023-08-10
23-02-46](https://github.com/go-gitea/gitea/assets/47871822/a0358f4b-4e05-4194-a7bc-6e0ecba5a9b6)
---------
Co-authored-by: silverwind <me@silverwind.io>
Even if GetDisplayName() is normally preferred elsewhere, this change
provides more consistency, as usernames are also always being shown
when participating in a conversation taking place in an issue or
a pull request. This change makes conversations easier to follow, as
you would not have to have a mental association between someone's
username and someone's real name in order to follow what is happening.
This behavior matches GitHub's. Optimally, both the username and the
full name (if applicable) could be shown, but such an effort is a
much bigger task that needs to be thought out well.
Fix #26129
Replace #26258
This PR introduces a transaction when creating a pull request, so that if
any step fails, everything is rolled back and no dirty pull request is
left behind.
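A generic sketch of the pattern (plain `database/sql` here, not Gitea's actual helpers; table and column names are illustrative): every step runs inside one transaction, so a failure at any point rolls the whole creation back.

```go
package pulls

import (
	"context"
	"database/sql"
)

// createPullRequestTx sketches the transactional flow: if either insert
// fails, the deferred Rollback undoes everything and no dirty row remains.
func createPullRequestTx(ctx context.Context, db *sql.DB, title string) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit has succeeded

	res, err := tx.ExecContext(ctx, "INSERT INTO issue (title, is_pull) VALUES (?, ?)", title, true)
	if err != nil {
		return err
	}
	issueID, err := res.LastInsertId()
	if err != nil {
		return err
	}
	if _, err := tx.ExecContext(ctx, "INSERT INTO pull_request (issue_id) VALUES (?)", issueID); err != nil {
		return err
	}
	return tx.Commit()
}
```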
---------
Co-authored-by: Giteabot <teabot@gitea.io>
This PR is an extended implementation of #25189 and builds upon the
proposal by @hickford in #25653, utilizing some ideas proposed
internally by @wxiaoguang.
Mainly, this PR consists of a mechanism to pre-register OAuth2
applications on startup, which can be enabled or disabled by modifying
the `[oauth2].DEFAULT_APPLICATIONS` parameter in app.ini. The OAuth2
applications registered this way are marked as "locked" and can
neither be deleted nor edited over the UI, to prevent confusing/unexpected
behavior. Instead, they are removed if they are no longer enabled in the config.
![grafik](https://github.com/go-gitea/gitea/assets/47871822/81a78b1c-4b68-40a7-9e99-c272ebb8f62e)
The implemented mechanism can also be used to pre-register other OAuth2
applications in the future, if wanted.
Co-authored-by: hickford <mirth.hickford@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
---------
Co-authored-by: M Hickford <mirth.hickford@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
- The permalink and 'Reference in New Issue' URL of a renderable file
(one where you can see both the source and a rendered version of it, such
as Markdown) doesn't contain `?display=source`. This leads to the issue
that the URL doesn't have any effect, as by default the rendered version
is shown and thus not the source.
- Add `?display=source` to the permalink URL and to 'Reference in New
Issue' if it's a renderable file.
- Add integration testing.
Refs: https://codeberg.org/forgejo/forgejo/pulls/1088
Co-authored-by: Gusted <postmaster@gusted.xyz>
Co-authored-by: Giteabot <teabot@gitea.io>
I noticed that `issue_service.CreateComment` adds transaction operations
around `issues_model.CreateComment`; we can merge the two functions and
avoid the `services` layer methods calling each other.
Co-authored-by: Giteabot <teabot@gitea.io>
Related to #26239
This PR makes some fixes:
- do not show the prompt for mirror repos and repos with pull request
units disabled
- use `commit_time` instead of `updated_unix`, as `commit_time` is the
real time when the branch was pushed
Fix #24662.
Replace #24822 and #25708 (although it has been merged)
## Background
In the past, Gitea supported issue searching with a keyword and
conditions in a less efficient way. It worked by searching for issues
with the keyword and obtaining limited IDs (as it is heavy to get all)
on the indexer (bleve/elasticsearch/meilisearch), and then querying with
conditions on the database to find a subset of the found IDs. This is
why the results could be incomplete.
To solve this issue, we need to store all fields that could be used as
conditions in the indexer and support both keyword and additional
conditions when searching with the indexer.
## Major changes
- Redefine `IndexerData` to include all fields that could be used as
filter conditions.
- Refactor `Search(ctx context.Context, kw string, repoIDs []int64,
limit, start int, state string)` to `Search(ctx context.Context, options
*SearchOptions)`, so it supports more conditions now (see the sketch after this list).
- Change the data type stored in `issueIndexerQueue`. Use
`IndexerMetadata` instead of `IndexerData` in case the data has been
updated while it is in the queue. This also reduces the storage size of
the queue.
- Enhance searching with Bleve/Elasticsearch/Meilisearch, make them
fully support `SearchOptions`. Also, update the data versions.
- Keep most of the database indexer's logic, but remove
`issues.SearchIssueIDsByKeyword` from `models` to avoid confusion about
where the entry point for searching issues is.
- Start a Meilisearch instance to test it in unit tests.
- Add unit tests with almost full coverage to test
Bleve/Elasticsearch/Meilisearch indexer.
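A purely illustrative sketch of the new shape (the field names here are assumptions, not Gitea's exact `SearchOptions` definition): keyword and filter conditions travel together, so the indexer can answer the whole query instead of returning a partial ID set.

```go
package indexer

import "context"

// SearchOptions is an illustrative stand-in: all filter conditions that used
// to be separate parameters (or separate DB queries) live on one struct.
type SearchOptions struct {
	Keyword  string
	RepoIDs  []int64
	IsClosed *bool // nil means "any state"
	Page     int
	PageSize int
}

// Indexer shows the shape of the refactored entry point described above.
type Indexer interface {
	Search(ctx context.Context, opts *SearchOptions) (ids []int64, total int64, err error)
}
```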
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
In the original implementation, we could only get the first 30 records of
the commit status (the default paging size); if there are more than 30
commit statuses, this leads to bug #25990. I made the following two
changes.
- On the page, use the `db.ListOptions{ListAll: true}` parameter
instead of `db.ListOptions{}`
- The `GetLatestCommitStatus` function now checks
whether or not a pager is being used.
Fixes #25990
The API should only return a user's real email address if the caller is
logged in. The check for this didn't work; this PR fixes it. This is not
really a security issue, but it can lead to spam.
---------
Co-authored-by: silverwind <me@silverwind.io>
Fix #25934
Add an `ignoreGlobal` parameter to `reqUnitAccess` and only check globally
disabled units when `ignoreGlobal` is true, so org-level projects
and user-level projects won't be affected by the globally disabled
`repo.projects` unit.
Fixes #25918
The migration fails on MSSQL because xorm tries to update the primary
key column. xorm prevents this if the column is marked as auto
increment:
c622cdaf89/internal/statements/update.go (L38-L40)
I think it would be better if xorm would check for primary key columns
here, because updating such columns is bad practice. It looks like
the auto increment check should do the same.
fyi @lunny
This PR
- Fix #26093. Replace `time.Time` with `timeutil.TimeStamp`
- Fix #26135. Add missing `xorm:"extends"` to `CountLFSMetaObject` for
LFS meta object query
- Add a unit test for LFS meta object garbage collection
Before:
![image](https://github.com/go-gitea/gitea/assets/18380374/8c5643b5-5c16-4674-9fe6-9e7fa2dda0b9)
After:
![image](https://github.com/go-gitea/gitea/assets/18380374/caf8891b-14df-418d-a7eb-977b54b9e9be)
There's a bug in the recent logic: `CalcCommitStatus` will always return
the first item of `statuses` or an error status, because `state` is defined
with a default value, which happens to be `CommitStatusSuccess`.
Then
``` golang
if status.State.NoBetterThan(state) {
```
this `if` will always return false unless `status.State` is
`CommitStatusError`, which makes no sense.
So `lastStatus` will always be `nil` or an error status.
Then we will always return the first item of `statuses` here, or only
return an error status, and this is why in the first picture the commit
status is `Success` rather than `Failure`.
af1ffbcd63/models/git/commit_status.go (L204-L211)
Co-authored-by: Giteabot <teabot@gitea.io>
- cancel running jobs if the event is push
- Add a new function `CancelRunningJobs` to cancel all running jobs of a
run
- Update `FindRunOptions` struct to include `Ref` field and update its
condition in `toConds` function
- Implement auto cancellation of running jobs in the same workflow in
`notify` function
related task: https://github.com/go-gitea/gitea/pull/22751/
---------
Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Signed-off-by: appleboy <appleboy.tw@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: delvh <dev.lh@web.de>
Close #24544
Changes:
- Create `action_tasks_version` table to store the latest version of
each scope (global, org and repo).
- When a job with the status of `waiting` is created, the tasks version
of the scopes it belongs to will increase.
- When the status of a job already in the database is updated to
`waiting`, the tasks version of the scopes it belongs to will increase.
- On the Gitea side, `FetchTask()` will try to query the
`action_tasks_version` record for the scope of the runner that calls
`FetchTask()`. If the record does not exist, it will insert a row. Then
Gitea compares the version passed from the runner with the
version in the database; if they are inconsistent, it tries to pick a task.
Gitea always returns the latest version from the database to the runner.
Related:
- Protocol: https://gitea.com/gitea/actions-proto-def/pulls/10
- Runner: https://gitea.com/gitea/act_runner/pulls/219
To avoid deadlock problems, almost all database-related functions should
have ctx as the first parameter.
This PR refactors some of these functions accordingly.
Fix #25776. Close #25826.
In the discussion of #25776, @wolfogre's suggestion was to remove the
commit statuses `running` and `warning` to keep them consistent with
GitHub.
references:
-
https://docs.github.com/en/rest/commits/statuses?apiVersion=2022-11-28#about-commit-statuses
## ⚠️ BREAKING ⚠️
So the commit status of Gitea will be consistent with GitHub, only
`pending`, `success`, `error` and `failure`, while `warning` and
`running` are not supported anymore.
---------
Co-authored-by: Jason Song <i@wolfogre.com>
The current Actions artifacts implementation only supports single-file
artifacts. To support uploading multiple files, it needs to:
- save each file to its own db record with the same run-id, the same
artifact-name and the proper artifact-path
- change the artifact upload URL to not include an artifact-id, since
multiple files create multiple artifact-ids
- support `path` in the download-artifact action; artifacts should download
to `{path}/{artifact-path}`.
- in the repo Actions view, provide a zip download link in the artifacts list
on the summary page, no matter whether the artifact contains a single file
or multiple files.
Before:
![image](https://github.com/go-gitea/gitea/assets/18380374/1ab476dc-2f9b-4c85-9e87-105fc73af1ee)
After:
![image](https://github.com/go-gitea/gitea/assets/18380374/786f984d-5c27-4eff-b3d9-159f68034ce4)
This issue comes from the change in #25468.
`LoadProject` will always return at least one record, so we use
`ProjectID` to check whether an issue is linked to a project in the old
code.
As with other `issue.LoadXXX` functions, we need to check the return value
of `xorm.Session.Get`.
In recent unit tests, we only test `issueList.LoadAttributes()` but
don't test `issue.LoadAttributes()`. So I added a new test for
`issue.LoadAttributes()` in this PR.
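A generic sketch of the pattern (illustrative types, not Gitea's): xorm's `Get` reports through its boolean return whether a row was found, and that flag is what has to be checked.

```go
package issues

import (
	"context"

	"xorm.io/xorm"
)

// Project is an illustrative stand-in for the project model.
type Project struct {
	ID      int64
	IssueID int64 `xorm:"index"`
}

// loadProject returns nil (and no error) when the issue has no project:
// the `has` flag from Get, not the error, tells us whether a row exists.
func loadProject(ctx context.Context, e *xorm.Engine, issueID int64) (*Project, error) {
	p := new(Project)
	has, err := e.Context(ctx).Where("issue_id = ?", issueID).Get(p)
	if err != nil {
		return nil, err
	}
	if !has {
		return nil, nil // not linked to any project
	}
	return p, nil
}
```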
---------
Co-authored-by: Denys Konovalov <privat@denyskon.de>
Fixes (?) #25538
Fixes https://codeberg.org/forgejo/forgejo/issues/972
Regression: #23879 introduced a change which prevents read access to packages if a
user is not a member of an organization.
That PR also contained a change which disallows package access if the
team unit is configured with "no access" for packages. I don't think
this change makes sense (at the moment). It may be relevant for private
orgs. But for public or limited orgs that's useless because an
unauthorized user would have more access rights than the team member.
This PR restores the old behaviour "If a user has read access for an
owner, they can read packages".
---------
Co-authored-by: Giteabot <teabot@gitea.io>
related #16865
This PR adds an accessibility check before mounting container blobs.
---------
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: silverwind <me@silverwind.io>
This PR displays a pull request creation hint on the repository home
page when there are newly created branches with no pull request. Only
branches updated within the last 6 hours are considered, and at most 2 are displayed.
Inspired by #14003
Replace #14003
Resolves #311
Resolves #13196
Resolves #23743
co-authored by @kolaente
Add a few extra test cases and test functions for the collaboration
model to get everything covered by tests (except for error handling, as
we cannot suddenly mock errors from the database).
Co-authored-by: Gusted <postmaster@gusted.xyz>
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/825
Co-authored-by: Gusted <gusted@noreply.codeberg.org>
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: Giteabot <teabot@gitea.io>
Fix #25558
Extract from #22743
This PR adds a repository check when creating/deleting branches via the
API: mirror repositories and archived repositories cannot do that.
When a branch's commit message is too long, the column may be too
short (TEXT is 16K for MySQL).
This PR fixes it by storing only the summary, because the message is
only shown in the branch list or a possible future search.
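A small sketch of the idea: keep only the first line of the commit message before writing it to the branch table.

```go
package git

import "strings"

// commitSummary returns the first line of a commit message, which is all the
// branch list needs to display, keeping the stored value small.
func commitSummary(message string) string {
	if i := strings.IndexAny(message, "\r\n"); i >= 0 {
		message = message[:i]
	}
	return strings.TrimSpace(message)
}
```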
Related #14180
Related #25233
Related #22639
Close #19786
Related #12763
This PR changes all branch retrieval from reading git
data to reading the database, to reduce git read operations.
- [x] Sync git branches information into database when push git data
- [x] Create a new table `Branch`, merge some columns of `DeletedBranch`
into `Branch` table and drop the table `DeletedBranch`.
- [x] Read `Branch` table when visit `code` -> `branch` page
- [x] Read `Branch` table when list branch names in `code` page dropdown
- [x] Read `Branch` table when list git ref compare page
- [x] Provide a button in admin page to manually sync all branches.
- [x] Sync branches if repository is not empty but database branches are
empty when visiting pages with branches list
- [x] Use `commit_time desc` as the default FindBranch order-by, to keep
it consistent with before; deleted branches will always be at the end.
---------
Co-authored-by: Jason Song <i@wolfogre.com>
related to #21820
- Split `Size` in the repository table into two new columns: `GitSize`
for the git size and `LFSSize` for the LFS data, while still storing the
full size in the `Size` column.
- Show the full size in the UI, but show each of them in a `title` tooltip; example:
![image](https://user-images.githubusercontent.com/25342410/218636251-e200f085-d7e7-4a25-9ff1-b586a63e07a9.png)
- Return the full size in the API response.
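An illustrative sketch of the described split (field names are assumptions, not the exact repository model):

```go
package repo

// RepoSizes mirrors the idea: keep the git and LFS sizes separately while the
// existing Size column continues to hold their sum.
type RepoSizes struct {
	Size    int64 // full size, as before
	GitSize int64 // size of the git objects only
	LFSSize int64 // size of the LFS data only
}

// UpdateSize recomputes the total from its two parts.
func (s *RepoSizes) UpdateSize() {
	s.Size = s.GitSize + s.LFSSize
}
```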
---------
Signed-off-by: a1012112796 <1012112796@qq.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: DmitryFrolovTri <23313323+DmitryFrolovTri@users.noreply.github.com>
Co-authored-by: Giteabot <teabot@gitea.io>
Fix #25451.
Bugfixes:
- When stopping the zombie or endless tasks, set `LogInStorage` to true
after transferring the file to storage. This was missing before, so writes
could go to a nonexistent file in DBFS because `LogInStorage` was false.
- Always update `ActionTask.Updated` when there's a new state reported
by the runner, even if there's no change. This is to avoid the task
being judged as a zombie task.
Enhancement:
- Support `Stat()` for DBFS file.
- `WriteLogs` refuses to write if it could result in content holes.
---------
Co-authored-by: Giteabot <teabot@gitea.io>
Fix #25088
This PR adds the support for
[`pull_request_target`](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target)
workflow trigger. `pull_request_target` is similar to `pull_request`,
but the workflow triggered by the `pull_request_target` event runs in
the context of the base branch of the pull request rather than the head
branch. Since the workflow from the base is considered trusted, it can
access the secrets and doesn't need approvals to run.
The recent change to xorm's `Sync` is that it will not warn when the database
has columns which are not listed in the struct. So we only need these warning
logs when `Sync`ing the whole database, but not in the migrations' Sync.
This PR removes almost all unnecessary warning logs from migrations.
Now the logs below will disappear from CI.
```log
2023/06/23 17:51:32 models/db/engine.go:191:InitEngineWithMigration() [W] Table gtestschema.project has column creator_id but struct has not related field
2023/06/23 17:51:32 models/db/engine.go:191:InitEngineWithMigration() [W] Table gtestschema.project has column is_closed but struct has not related field
2023/06/23 17:51:32 models/db/engine.go:191:InitEngineWithMigration() [W] Table gtestschema.project has column board_type but struct has not related field
2023/06/23 17:51:32 models/db/engine.go:191:InitEngineWithMigration() [W] Table gtestschema.project has column type but struct has not related field
2023/06/23 17:51:32 models/db/engine.go:191:InitEngineWithMigration() [W] Table gtestschema.project has column closed_date_unix but struct has not related field
2023/06/23 17:51:32 models/db/engine.go:191:InitEngineWithMigration() [W] Table gtestschema.project has column created_unix but struct has not related field
2023/06/23 17:51:32 models/db/engine.go:191:InitEngineWithMigration() [W] Table gtestschema.project has column updated_unix but struct has not related field
2023/06/23 17:51:32 models/db/engine.go:191:InitEngineWithMigration() [W] Table gtestschema.project has column card_type but struct has not related field
```
This will allow us to fully localize it later.
PS: we can not migrate back, as the old value was a one-way conversion.
Prepare for #25213
---
*Sponsored by Kithara Software GmbH*
# The problem
There were many "path tricks":
* By default, Gitea uses its program directory as its work path
* Gitea tries to use the "work path" to guess its "custom path" and
"custom conf (app.ini)"
* Users might want to use other directories as work path
* The non-default work path should be passed to Gitea by GITEA_WORK_DIR
or "--work-path"
* But some Gitea processes are started without these values
* The "serv" process started by OpenSSH server
* The CLI sub-commands started by site admin
* The paths are guessed by SetCustomPathAndConf again and again
* The default values of "work path / custom path / custom conf" can be
changed when compiling
# The solution
* Use `InitWorkPathAndCommonConfig` to handle these path tricks, and use
test code to cover its behaviors.
* When Gitea's web server runs, write the WORK_PATH to "app.ini", this
value must be the most correct one, because if this value is not right,
users would find that the web UI doesn't work and then they should be
able to fix it.
* Then all other sub-commands can use the WORK_PATH in app.ini to
initialize their paths.
* By the way, when Gitea starts for git protocol, it shouldn't output
any log, otherwise the git protocol gets broken and client blocks
forever.
The "work path" priority is: WORK_PATH in app.ini > cmd arg --work-path
> env var GITEA_WORK_DIR > builtin default
The "app.ini" searching order is: cmd arg --config > cmd arg "work path
/ custom path" > env var "work path / custom path" > builtin default
## ⚠️ BREAKING
If your instance's "work path / custom path / custom conf" doesn't meet
the requirements (eg: work path must be absolute), Gitea will report a
fatal error and exit. You need to set these values according to the
error log.
----
Close #24818
Close #24222
Close #21606
Close #21498
Close #25107
Close #24981
Maybe close #24503
Replace #23301
Replace #22754
And maybe more
Follow up #22405
Fix #20703
This PR rewrites the storage configuration read sequence with some breaking
changes and tests. It becomes stricter than before and also fixes some
inherited problems.
- Move storage's MinioConfig struct into setting, so after the
configuration loading, the values are stored in the struct rather than
remaining in some section.
- All storage configurations should be stored in one section;
configuration items cannot be overridden by multiple sections. The
priority of configuration is `[attachment]` > `[storage.attachments]` |
`[storage.customized]` > `[storage]` > `default`
- For extra override configuration items, currently `SERVE_DIRECT`,
`MINIO_BASE_PATH` and `MINIO_BUCKET`, which could be configured in another
section, the priority of the override configuration is `[attachment]` >
`[storage.attachments]` > `default`.
- Add more tests for storages configurations.
- Update the storage documentations.
---------
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
close #24540
related:
- Protocol: https://gitea.com/gitea/actions-proto-def/pulls/9
- Runner side: https://gitea.com/gitea/act_runner/pulls/201
changes:
- Add a `labels` column to the `action_runner` table, and combine the values
of the `agent_labels` and `custom_labels` columns into the `labels` column.
- Store `labels` when registering `act_runner`.
- Update `labels` when `act_runner` starts and calls `Declare`.
- Users cannot modify the `custom labels` on the edit page anymore.
other changes:
- Store `version` when registering `act_runner`.
- If the runner is the latest version, parse the version from `Declare`; older
runners still have their version parsed from the request header.
Enable deduplication of unofficial reviews. When pull requests are
configured to include all approvers (not just official ones) in the
default merge messages, it was possible to generate duplicated
Reviewed-by lines for a single person. Add an option to find only
distinct reviews for a given query.
fixes #24795
---------
Signed-off-by: Cory Todd <cory.todd@canonical.com>
Co-authored-by: Giteabot <teabot@gitea.io>
## Changes
- Adds the following high level access scopes, each with `read` and
`write` levels:
- `activitypub`
- `admin` (hidden if user is not a site admin)
- `misc`
- `notification`
- `organization`
- `package`
- `issue`
- `repository`
- `user`
- Adds new middleware function `tokenRequiresScopes()` in addition to
`reqToken()`
- `tokenRequiresScopes()` is used for each high-level api section
- _if_ a scoped token is present, checks that the required scope is
included based on the section and HTTP method
- `reqToken()` is used for individual routes
- checks that required authentication is present (but does not check
scope levels, as this will already have been handled by
`tokenRequiresScopes()`)
- Adds migration to convert old scoped access tokens to the new set of
scopes
- Updates the user interface for scope selection
### User interface example
<img width="903" alt="Screen Shot 2023-05-31 at 1 56 55 PM"
src="https://github.com/go-gitea/gitea/assets/23248839/654766ec-2143-4f59-9037-3b51600e32f3">
<img width="917" alt="Screen Shot 2023-05-31 at 1 56 43 PM"
src="https://github.com/go-gitea/gitea/assets/23248839/1ad64081-012c-4a73-b393-66b30352654c">
## tokenRequiresScopes Design Decision
- `tokenRequiresScopes()` was added to more reliably cover api routes.
For an incoming request, this function uses the given scope category
(say `AccessTokenScopeCategoryOrganization`) and the HTTP method (say
`DELETE`) and verifies that any scoped tokens in use include
`delete:organization`.
- `reqToken()` is used to enforce auth for individual routes that
require it. If a scoped token is not present for a request,
`tokenRequiresScopes()` will not return an error
## TODO
- [x] Alphabetize scope categories
- [x] Change 'public repos only' to a radio button (private vs public).
Also expand this to organizations
- [X] Disable token creation if no scopes selected. Alternatively, show
warning
- [x] `reqToken()` is missing from many `POST/DELETE` routes in the api.
`tokenRequiresScopes()` only checks that a given token has the correct
scope, `reqToken()` must be used to check that a token (or some other
auth) is present.
- _This should be addressed in this PR_
- [x] The migration should be reviewed very carefully in order to
minimize access changes to existing user tokens.
- _This should be addressed in this PR_
- [x] Link to api to swagger documentation, clarify what
read/write/delete levels correspond to
- [x] Review cases where more than one scope is needed as this directly
deviates from the api definition.
- _This should be addressed in this PR_
- For example:
```go
m.Group("/users/{username}/orgs", func() {
m.Get("", reqToken(), org.ListUserOrgs)
m.Get("/{org}/permissions", reqToken(), org.GetUserOrgsPermissions)
}, tokenRequiresScopes(auth_model.AccessTokenScopeCategoryUser,
auth_model.AccessTokenScopeCategoryOrganization),
context_service.UserAssignmentAPI())
```
## Future improvements
- [ ] Add required scopes to swagger documentation
- [ ] Redesign `reqToken()` to be opt-out rather than opt-in
- [ ] Subdivide scopes like `repository`
- [ ] Once a token is created, if it has no scopes, we should display
text instead of an empty bullet point
- [ ] If the 'public repos only' option is selected, should read
categories be selected by default
Closes #24501
Closes #24799
Co-authored-by: Jonathan Tran <jon@allspice.io>
Co-authored-by: Kyle D <kdumontnu@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
Before, Gitea showed the database table stats on the `admin dashboard`
page.
It has some problems:
* `count(*)` is quite heavy. If tables have many records, it blocks
loading the admin page for a long time
* Some users have even reported that they can't visit their admin
page because this page blocks or returns a `50x error (reverse proxy
timeout)`
* The `actions` stat is not useful. The table is simply too large. Does
it really matter if it contains 1,000,000 rows or 9,999,999 rows?
* The translation `admin.dashboard.statistic_info` is difficult to
maintain.
So, this PR uses a separate page to show the stats and removes the
`actions` stat.
![image](https://github.com/go-gitea/gitea/assets/2114189/babf7c61-b93b-4a62-bfaa-22983636427e)
## ⚠️ BREAKING
The `actions` Prometheus metrics collector has been removed for the
reasons mentioned beforehand.
Please do not rely on its output anymore.
This addresses some things from #24406 that came up after the PR was
merged. Mostly from @delvh.
---------
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: delvh <dev.lh@web.de>
This PR replaces all string refName as a type `git.RefName` to make the
code more maintainable.
Fix #15367
Replaces #23070
It also fixes a bug where tags were not synced, because `git remote --prune
origin` will not remove local tags if they were removed on the remote.
We should in fact use `git fetch --prune --tags origin` rather than `git
remote update origin` to do the sync.
Some answers from ChatGPT for reference.
> If the git fetch --prune --tags command is not working as expected,
there could be a few reasons why. Here are a few things to check:
>
>Make sure that you have the latest version of Git installed on your
system. You can check the version by running git --version in your
terminal. If you have an outdated version, try updating Git and see if
that resolves the issue.
>
>Check that your Git repository is properly configured to track the
remote repository's tags. You can check this by running git config
--get-all remote.origin.fetch and verifying that it includes
+refs/tags/*:refs/tags/*. If it does not, you can add it by running git
config --add remote.origin.fetch "+refs/tags/*:refs/tags/*".
>
>Verify that the tags you are trying to prune actually exist on the
remote repository. You can do this by running git ls-remote --tags
origin to list all the tags on the remote repository.
>
>Check if any local tags have been created that match the names of tags
on the remote repository. If so, these local tags may be preventing the
git fetch --prune --tags command from working properly. You can delete
local tags using the git tag -d command.
---------
Co-authored-by: delvh <dev.lh@web.de>
This adds the ability to pin important Issues and Pull Requests. You can
also move pinned Issues around to change their Position. Resolves #2175.
## Screenshots
![grafik](https://user-images.githubusercontent.com/15185051/235123207-0aa39869-bb48-45c3-abe2-ba1e836046ec.png)
![grafik](https://user-images.githubusercontent.com/15185051/235123297-152a16ea-a857-451d-9a42-61f2cd54dd75.png)
![grafik](https://user-images.githubusercontent.com/15185051/235640782-cbfe25ec-6254-479a-a3de-133e585d7a2d.png)
The Design was mostly copied from the Projects Board.
## Implementation
This uses a new `pin_order` column in the `issue` table. If the value is
set to 0, the Issue is not pinned. If it's set to a bigger value, the
value is the position: 1 means it's the first pinned Issue, 2 means it's
the second one, etc. This is divided into Issues and Pull Requests for each
repo.
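A minimal sketch of the `pin_order` convention described above (illustrative, not the exact Gitea model):

```go
package issues

// Issue shows only the fields relevant to pinning.
type Issue struct {
	ID       int64
	IsPull   bool
	PinOrder int // 0 = not pinned; 1 = first pinned, 2 = second, ...
}

// IsPinned reports whether the issue is pinned at all.
func (i *Issue) IsPinned() bool {
	return i.PinOrder != 0
}
```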
## TODO
- [x] You can currently pin as many Issues as you want. Maybe we should
add a Limit, which is configurable. GitHub uses 3, but I prefer 6, as
this is better for bigger Projects, but I'm open for suggestions.
- [x] Pin and Unpin events need to be added to the Issue history.
- [x] Tests
- [x] Migration
**The feature itself is currently fully working, so tester who may find
weird edge cases are very welcome!**
---------
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: Giteabot <teabot@gitea.io>
close https://github.com/go-gitea/gitea/issues/16321
Provided a webhook trigger for requesting someone to review the Pull
Request.
Some modifications have been made to the returned `PullRequestPayload`
based on the GitHub webhook settings, including:
- adding a description of the current reviewer object as
`RequestedReviewer`.
- setting the action to either **review_requested** or
**review_request_removed** based on the operation.
- adding the `RequestedReviewers` field to `issues_model.PullRequest`.
This field will be loaded into the PullRequest through
`LoadRequestedReviewers()` when `ToAPIPullRequest` is called.
After the Pull Request is merged, I will supplement the relevant
documentation.
## ⚠️ Breaking
The `log.<mode>.<logger>` style config has been dropped. If you used it,
please check the new config manual & app.example.ini to make your
instance output logs as expected.
Although many legacy options still work, it's encouraged to upgrade to
the new options.
The SMTP logger is deleted because SMTP is not suitable to collect logs.
If you have manually configured Gitea log options, please confirm the
logger system works as expected after upgrading.
## Description
Close #12082 and maybe more log-related issues, resolve some related
FIXMEs in old code (which seems unfixable before)
Just like rewriting queue #24505 : make code maintainable, clear legacy
bugs, and add the ability to support more writers (eg: JSON, structured
log)
There is a new document (with examples): `logging-config.en-us.md`
This PR is safer than the queue rewriting, because it's just for
logging, it won't break other logic.
## The old problems
The logging system is quite old and difficult to maintain:
* Unclear concepts: Logger, NamedLogger, MultiChannelledLogger,
SubLogger, EventLogger, WriterLogger etc
* For some code it is difficult to know whether it is right:
`log.DelNamedLogger("console")` vs `log.DelNamedLogger(log.DEFAULT)` vs
`log.DelLogger("console")`
* The old system heavily depends on the ini config system; it's difficult to
create a new logger for a different purpose, and it's very fragile.
* The "color" trick is difficult to use and read, many colors are
unnecessary, and in the future structured logs could help
* It's difficult to add other log formats, eg: JSON format
* The log outputer doesn't have full control of its goroutine, so it's
difficult to give outputers advanced behaviors
* The logs could be lost in some cases: eg: no Fatal error when using
CLI.
* Config options are passed by JSON, which is quite fragile.
* INI package makes the KEY in `[log]` section visible in `[log.sub1]`
and `[log.sub1.subA]`, this behavior is quite fragile and would cause
more unclear problems, and there is no strong requirement to support
`log.<mode>.<logger>` syntax.
## The new design
See `logger.go` for documents.
## Screenshot
<details>
![image](https://github.com/go-gitea/gitea/assets/2114189/4462d713-ba39-41f5-bb08-de912e67e1ff)
![image](https://github.com/go-gitea/gitea/assets/2114189/b188035e-f691-428b-8b2d-ff7b2199b2f9)
![image](https://github.com/go-gitea/gitea/assets/2114189/132e9745-1c3b-4e00-9e0d-15eaea495dee)
</details>
## TODO
* [x] add some new tests
* [x] fix some tests
* [x] test some sub-commands (manually ....)
---------
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: Giteabot <teabot@gitea.io>
This PR started as a refactor, and now it does 4 things.
- [x] Move the organization and user renaming logic into the services layer and
merge it into one function
- [x] Support renaming a user with a capitalization change only. For example, rename the
user from `Lunny` to `lunny`. We just need to change one table, `user`,
and others should not be touched.
- [x] Before this PR, some renames were missed, like `agit`
- [x] Fix a bug: change the API return from `http.StatusNoContent` to `http.StatusOK`
The topics are saved in the `repo_topic` table.
They are also saved directly in the `repository` table.
Before this PR, only `AddTopic` and `SaveTopics` made sure the `topics`
field in the `repository` table was synced with the `repo_topic` table.
This PR makes sure `GenerateTopics` and `DeleteTopic`
also sync the `topics` in the repository table.
`RemoveTopicsFromRepo` doesn't need to sync the data
as it is only used to delete a repository.
Fixes #24820
This PR
- [x] Move some functions from `issues.go` to `issue_stats.go` and
`issue_label.go`
- [x] Remove duplicated issue options `UserIssueStatsOption` to keep
only one `IssuesOptions`
This PR
- [x] Move some code from `issue.go` to `issue_search.go` and
`issue_update.go`
- [x] Use `IssuesOptions` instead of `IssueStatsOptions` because they
are too similar.
- [x] Rename some functions
Fix #21324
In the current logic, if the `Actor` user is not an admin user, all
activities from private organizations won't be shown even if the `Actor`
user is a member of the organization.
As mentioned in the issue, when a deploy key is used to commit and
push, the activity's `act_user_id` will be the ID of the organization, so
the activity won't be shown to non-admin users because the visibility of
the organization is private.
55a5717760/models/activities/action.go (L490-L503)
This PR improves this logic so the activities of private organizations
can be shown.
Fixes #24145
To solve the bug, I added a "computed" `TargetBehind` field to the
`Release` model, which indicates the target branch of a release.
This is particularly useful if the target branch was deleted in the
meantime (or is empty).
I also did a micro-optimization in `calReleaseNumCommitsBehind`. Instead
of checking that a branch exists and then calling `GetBranchCommit`, I
immediately call `GetBranchCommit` and handle the `git.ErrNotExist`
error.
This optimization is covered by the added unit test.
# ⚠️ Breaking
Many deprecated queue config options are removed (actually, they should
have been removed in 1.18/1.19).
If you see the fatal message when starting Gitea: "Please update your
app.ini to remove deprecated config options", please follow the error
messages to remove these options from your app.ini.
Example:
```
2023/05/06 19:39:22 [E] Removed queue option: `[indexer].ISSUE_INDEXER_QUEUE_TYPE`. Use new options in `[queue.issue_indexer]`
2023/05/06 19:39:22 [E] Removed queue option: `[indexer].UPDATE_BUFFER_LEN`. Use new options in `[queue.issue_indexer]`
2023/05/06 19:39:22 [F] Please update your app.ini to remove deprecated config options
```
Many options in `[queue]` are dropped, including:
`WRAP_IF_NECESSARY`, `MAX_ATTEMPTS`, `TIMEOUT`, `WORKERS`,
`BLOCK_TIMEOUT`, `BOOST_TIMEOUT`, `BOOST_WORKERS`, they can be removed
from app.ini.
# The problem
The old queue package has some legacy problems:
* complexity: I doubt many people could tell how it works.
* maintainability: Too many channels and mutex/cond are mixed together,
and too many different structs/interfaces depend on each other.
* stability: due to the complexity & maintainability, sometimes there
are strange bugs that are difficult to debug, and some code doesn't have tests
(indeed some code is difficult to test because a lot of things are mixed
together).
* general applicability: although it is called "queue", its behavior is
not a well-known queue.
* scalability: it doesn't seem easy to make it work with a cluster
without breaking its behaviors.
It came from some very old code to "avoid breaking", however, its
technical debt is too heavy now. It's a good time to introduce a better
"queue" package.
# The new queue package
It keeps using the old config and concepts as much as possible.
* It only contains two major kinds of concepts:
* The "base queue": channel, levelqueue, redis
* They have the same abstraction, the same interface, and they are
tested by the same testing code.
* The "WokerPoolQueue", it uses the "base queue" to provide "worker
pool" function, calls the "handler" to process the data in the base
queue.
* The new code doesn't do "PushBack"
* Think about a queue with many workers, the "PushBack" can't guarantee
the order for re-queued unhandled items, so in new code it just does
"normal push"
* The new code doesn't do "pause/resume"
* The "pause/resume" was designed to handle some handler's failure: eg:
document indexer (elasticsearch) is down
* If a queue is paused for a long time, either the producers block or the
new items are dropped.
* The new code doesn't do such a "pause/resume" trick; it's not common
queue behavior and it doesn't help much.
* If there are unhandled items, the "push" function just blocks for a
few seconds and then re-queue them and retry.
* The new code doesn't do "worker booster"
* Gitea's queue's handlers are light functions, the cost is only the
go-routine, so it doesn't make sense to "boost" them.
* The new code only uses the "max worker number" to limit the concurrent
workers.
* The new "Push" never blocks forever
* Instead of creating more and more blocking goroutines, returning an error
is friendlier to the server and to the end user.
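A rough sketch of the two concepts from the list above (illustrative interfaces, not the exact Gitea types): a small base queue abstraction shared by the channel/levelqueue/redis backends, and a generic worker pool on top that calls the handler.

```go
package queue

import "context"

// baseQueue is the common abstraction implemented by the channel, levelqueue
// and redis backends; all of them can be exercised by the same tests.
type baseQueue interface {
	PushItem(ctx context.Context, data []byte) error
	PopItem(ctx context.Context) ([]byte, error)
	Len(ctx context.Context) (int, error)
}

// WorkerPoolQueue wraps a base queue and feeds popped items to a handler,
// bounded only by the maximum worker number (no booster, no pause/resume).
type WorkerPoolQueue[T any] struct {
	base       baseQueue
	maxWorkers int
	// handler processes a batch and returns the items it could not handle,
	// which are simply pushed again ("normal push", no PushBack).
	handler func(items ...T) (unhandled []T)
}
```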
There are more details in code comments: eg: the "Flush" problem, the
strange "code.index" hanging problem, the "immediate" queue problem.
Almost ready for review.
TODO:
* [x] add some necessary comments during review
* [x] add some more tests if necessary
* [x] update documents and config options
* [x] test max worker / active worker
* [x] re-run the CI tasks to see whether any test is flaky
* [x] improve the `handleOldLengthConfiguration` to provide more
friendly messages
* [x] fine tune default config values (eg: length?)
## Code coverage:
![image](https://user-images.githubusercontent.com/2114189/236620635-55576955-f95d-4810-b12f-879026a3afdf.png)
The "modules/context.go" is too large to maintain.
This PR splits it to separate files, eg: context_request.go,
context_response.go, context_serve.go
This PR will help:
1. The future refactoring for Gitea's web context (eg: simplify the context)
2. Introduce proper "range request" support
3. Introduce context function
This PR only moves code, doesn't change any logic.
Close #24213
Replace #23830
#### Cause
- Before, in order to let a PR get the latest commit after reopening,
the `ref` (${REPO_PATH}/refs/pull/${PR_INDEX}/head) of every closed PR
was updated when pushing commits to the `head branch` of the closed
PR.
#### Changes
- For a closed PR, these behaviors are no longer performed when pushing to
the `head branch` of the closed PR: inserting a `comment`, pushing a
`notification` (UI and email), executing the
[pushToBaseRepo](7422503341/services/pull/pull.go (L409))
function, and triggering an `action`.
- Refresh the reference of the PR when reopening the closed PR (**even
if the head branch has been deleted before**). Make the reference of the PR
consistent with the `head branch`.
I don't remember why the previous decision was that the `Code` and `Release`
units cannot be disabled globally. Since now every unit, including `Code`, can
be disabled, maybe we should have a new rule that the repo should have at
least one unit; then any unit could be disabled.
Fixes #20960
Fixes #7525
---------
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: yp05327 <576951401@qq.com>
Co-authored-by: @awkwardbunny
This PR adds a Debian package registry. You can follow [this
tutorial](https://www.baeldung.com/linux/create-debian-package) to build
a *.deb package for testing. Source packages are not supported at the
moment and I did not find documentation of the architecture "all" and
how these packages should be treated.
---------
Co-authored-by: Brian Hong <brian@hongs.me>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
There was only one `IsRepositoryExist` function; it did `has && isDir`.
However, that's not right, and it would cause a 500 error when creating a new
repository if the dir exists.
Then, it was changed to `has || isDir`, which is still incorrect, as it
affects the "adopt repo" logic.
To make the logic clear:
* IsRepositoryModelOrDirExist
* IsRepositoryModelExist
Enable the [forbidigo](https://github.com/ashanbrown/forbidigo) linter, which
forbids print statements. I will check how to integrate this with the
smallest impact possible, so a few `nolint` comments will likely be
required. The plan is to just go through the issues and either:
- Remove the print if it is nonsensical
- Add a `//nolint` directive if it makes sense
I don't plan on investigating the individual issues any further.
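Where a print is intentional (for example, CLI output), the kept call would get an inline directive; a small illustrative example:

```go
package cmd

import "fmt"

// printCreated writes directly to stdout on purpose (CLI output), so the
// forbidigo finding is silenced with an inline directive.
func printCreated(username string) {
	fmt.Printf("New user '%s' has been successfully created!\n", username) //nolint:forbidigo
}
```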
<details>
<summary>Initial Lint Results</summary>
```
modules/log/event.go:348:6: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(err)
^
modules/log/event.go:382:6: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(err)
^
modules/queue/unique_queue_disk_channel_test.go:20:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("TempDir %s\n", tmpDir)
^
contrib/backport/backport.go:168:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* Backporting %s to %s as %s\n", pr, localReleaseBranch, backportBranch)
^
contrib/backport/backport.go:216:4: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* Navigate to %s to open PR\n", url)
^
contrib/backport/backport.go:223:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* `xdg-open %s`\n", url)
^
contrib/backport/backport.go:233:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* `git push -u %s %s`\n", remote, backportBranch)
^
contrib/backport/backport.go:243:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* Amending commit to prepend `Backport #%s` to body\n", pr)
^
contrib/backport/backport.go:272:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("* Attempting git cherry-pick --continue")
^
contrib/backport/backport.go:281:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* Attempting git cherry-pick %s\n", sha)
^
contrib/backport/backport.go:297:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* Current branch is %s\n", currentBranch)
^
contrib/backport/backport.go:299:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* Current branch is %s - not checking out\n", currentBranch)
^
contrib/backport/backport.go:304:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* Branch %s already exists. Checking it out...\n", backportBranch)
^
contrib/backport/backport.go:308:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* `git checkout -b %s %s`\n", backportBranch, releaseBranch)
^
contrib/backport/backport.go:313:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* `git fetch %s main`\n", remote)
^
contrib/backport/backport.go:316:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(string(out))
^
contrib/backport/backport.go:319:2: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(string(out))
^
contrib/backport/backport.go:321:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("* `git fetch %s %s`\n", remote, releaseBranch)
^
contrib/backport/backport.go:324:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(string(out))
^
contrib/backport/backport.go:327:2: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(string(out))
^
models/unittest/fixtures.go:50:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("Unsupported RDBMS for integration tests")
^
models/unittest/fixtures.go:89:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("LoadFixtures failed after retries: %v\n", err)
^
models/unittest/fixtures.go:110:4: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Failed to generate sequence update: %v\n", err)
^
models/unittest/fixtures.go:117:6: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Failed to update sequence: %s Error: %v\n", value, err)
^
models/migrations/base/tests.go:118:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("Environment variable $GITEA_ROOT not set")
^
models/migrations/base/tests.go:127:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Could not find gitea binary at %s\n", setting.AppPath)
^
models/migrations/base/tests.go:134:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Environment variable $GITEA_CONF not set - defaulting to %s\n", giteaConf)
^
models/migrations/base/tests.go:145:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Unable to create temporary data path %v\n", err)
^
models/migrations/base/tests.go:154:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Unable to InitFull: %v\n", err)
^
models/migrations/v1_11/v112.go:34:5: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Error: %v", err)
^
contrib/fixtures/fixture_generation.go:36:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("CreateTestEngine: %+v", err)
^
contrib/fixtures/fixture_generation.go:40:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("PrepareTestDatabase: %+v\n", err)
^
contrib/fixtures/fixture_generation.go:46:5: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("generate '%s': %+v\n", r, err)
^
contrib/fixtures/fixture_generation.go:53:5: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("generate '%s': %+v\n", g.name, err)
^
contrib/fixtures/fixture_generation.go:71:4: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s created.\n", path)
^
services/gitdiff/gitdiff_test.go:543:2: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println(result)
^
services/gitdiff/gitdiff_test.go:560:2: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println(result)
^
services/gitdiff/gitdiff_test.go:577:2: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println(result)
^
modules/web/routing/logger_manager.go:34:2: use of `print` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
print Printer
^
modules/doctor/paths.go:109:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Warning: can't remove temporary file: '%s'\n", tmpFile.Name())
^
tests/test_utils.go:33:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf(format+"\n", args...)
^
tests/test_utils.go:61:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Environment variable $GITEA_CONF not set, use default: %s\n", giteaConf)
^
cmd/actions.go:54:9: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
_, _ = fmt.Printf("%s\n", respText)
^
cmd/admin_user_change_password.go:74:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s's password has been successfully updated!\n", user.Name)
^
cmd/admin_user_create.go:109:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("generated random password is '%s'\n", password)
^
cmd/admin_user_create.go:164:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Access token was successfully created... %s\n", t.Token)
^
cmd/admin_user_create.go:167:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("New user '%s' has been successfully created!\n", username)
^
cmd/admin_user_generate_access_token.go:74:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s\n", t.Token)
^
cmd/admin_user_generate_access_token.go:76:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Access token was successfully created: %s\n", t.Token)
^
cmd/admin_user_must_change_password.go:56:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Updated %d users setting MustChangePassword to %t\n", n, mustChangePassword)
^
cmd/convert.go:44:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("Converted successfully, please confirm your database's character set is now utf8mb4")
^
cmd/convert.go:50:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("Converted successfully, please confirm your database's all columns character is NVARCHAR now")
^
cmd/convert.go:52:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("This command can only be used with a MySQL or MSSQL database")
^
cmd/doctor.go:104:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(err)
^
cmd/doctor.go:105:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("Check if you are using the right config file. You can use a --config directive to specify one.")
^
cmd/doctor.go:243:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(err)
^
cmd/embedded.go:154:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(a.path)
^
cmd/embedded.go:198:3: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("Using app.ini at", setting.CustomConf)
^
cmd/embedded.go:217:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Extracting to %s:\n", destdir)
^
cmd/embedded.go:253:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s already exists; skipped.\n", dest)
^
cmd/embedded.go:275:2: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(dest)
^
cmd/generate.go:63:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s", internalToken)
^
cmd/generate.go:66:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("\n")
^
cmd/generate.go:78:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s", JWTSecretBase64)
^
cmd/generate.go:81:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("\n")
^
cmd/generate.go:93:2: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s", secretKey)
^
cmd/generate.go:96:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("\n")
^
cmd/keys.go:74:2: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println(strings.TrimSpace(authorizedString))
^
cmd/mailer.go:32:4: use of `fmt.Print` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Print("warning: Content is empty")
^
cmd/mailer.go:35:3: use of `fmt.Print` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Print("Proceed with sending email? [Y/n] ")
^
cmd/mailer.go:40:4: use of `fmt.Println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Println("The mail was not sent")
^
cmd/mailer.go:49:9: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
_, _ = fmt.Printf("Sent %s email(s) to all users\n", respText)
^
cmd/serv.go:147:3: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println("Gitea: SSH has been disabled")
^
cmd/serv.go:153:4: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("error showing subcommand help: %v\n", err)
^
cmd/serv.go:175:4: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println("Hi there! You've successfully authenticated with the deploy key named " + key.Name + ", but Gitea does not provide shell access.")
^
cmd/serv.go:177:4: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println("Hi there! You've successfully authenticated with the principal " + key.Content + ", but Gitea does not provide shell access.")
^
cmd/serv.go:179:4: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println("Hi there, " + user.Name + "! You've successfully authenticated with the key named " + key.Name + ", but Gitea does not provide shell access.")
^
cmd/serv.go:181:3: use of `println` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
println("If this is unexpected, please log in with password and setup Gitea under another user.")
^
cmd/serv.go:196:5: use of `fmt.Print` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Print(`{"type":"gitea","version":1}`)
^
tests/e2e/e2e_test.go:54:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Error initializing test database: %v\n", err)
^
tests/e2e/e2e_test.go:63:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("util.RemoveAll: %v\n", err)
^
tests/e2e/e2e_test.go:67:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Unable to remove repo indexer: %v\n", err)
^
tests/e2e/e2e_test.go:109:6: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%v", stdout.String())
^
tests/e2e/e2e_test.go:110:6: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%v", stderr.String())
^
tests/e2e/e2e_test.go:113:6: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%v", stdout.String())
^
tests/integration/integration_test.go:124:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Error initializing test database: %v\n", err)
^
tests/integration/integration_test.go:135:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("util.RemoveAll: %v\n", err)
^
tests/integration/integration_test.go:139:3: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("Unable to remove repo indexer: %v\n", err)
^
tests/integration/repo_test.go:357:4: use of `fmt.Printf` forbidden by pattern `^(fmt\.Print(|f|ln)|print|println)$` (forbidigo)
fmt.Printf("%s", resp.Body)
^
```
</details>
---------
Co-authored-by: Giteabot <teabot@gitea.io>
If a comment dismisses a review, we need to load the reviewer to show
whose review has been dismissed.
Related to:
20b6ae0e53/templates/repo/issue/view_content/comments.tmpl (L765-L770)
We don't need `.Review.Reviewer` for all comments, because
"dismissing" doesn't happen often, or we would have already received
error reports.
Before, there was a `log/buffer.go`, but that design is not general: it
introduces a lot of irrelevant `Content() (string, error)` and
`return "", fmt.Errorf("not supported")` boilerplate.
The old `log/buffer.go` is also difficult to use; developers have to
write a lot of `Contains` and `Sleep` code.
The new `LogChecker` is designed to be a general approach to help to
assert some messages appearing or not appearing in logs.
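A rough sketch of the idea (the real `LogChecker` API may differ; names here are illustrative): a writer that records which expected messages were seen and whether any forbidden message appeared, so tests can assert on the result instead of sleeping and grepping.
```go
package logchecker

import (
	"strings"
	"sync"
)

// LogChecker records which expected messages have appeared in the log output
// and which forbidden messages have appeared, so a test can assert on it.
type LogChecker struct {
	mu        sync.Mutex
	expected  map[string]bool // message -> seen at least once
	forbidden map[string]bool // message -> seen (should stay false)
}

func New(expected, forbidden []string) *LogChecker {
	c := &LogChecker{expected: map[string]bool{}, forbidden: map[string]bool{}}
	for _, msg := range expected {
		c.expected[msg] = false
	}
	for _, msg := range forbidden {
		c.forbidden[msg] = false
	}
	return c
}

// Write lets the checker be registered as a log output target.
func (c *LogChecker) Write(p []byte) (int, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	line := string(p)
	for msg := range c.expected {
		if strings.Contains(line, msg) {
			c.expected[msg] = true
		}
	}
	for msg := range c.forbidden {
		if strings.Contains(line, msg) {
			c.forbidden[msg] = true
		}
	}
	return len(p), nil
}
```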
Close #24195
Some of the changes are taken from another fix of mine,
f07b0de997
in #20147 (although that PR was discarded ....)
The bug is:
1. The old code doesn't handle `removedfile` event correctly
2. The old code doesn't provide attachments for type=CommentTypeReview
This PR doesn't intend to refactor the "upload" code to a perfect state
(to avoid making the review difficult), so some legacy styles are kept.
---------
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: Giteabot <teabot@gitea.io>
Close: #23738
The actual cause of `500 Internal Server Error` in the issue is not what
is described in the issue.
The actual cause is that after deleting a team, if there is a PR which has
requested review from the deleted team, the comment cannot be matched with
the deleted team by `assgin_team_id`. So the value of `.AssigneeTeam`
(see the code block below) is `nil`, which causes the `500` error.
1c8bc4081a/templates/repo/issue/view_content/comments.tmpl (L691-L695)
To fix this bug, there are the following problems to be resolved:
- [x] 1. ~~Store the name of the team in the `content` column when inserting
the `comment` into the DB, in case we cannot get the name of the team after it
is deleted. But for comments that already exist, just display "Unknown
Team"~~ Just display "Ghost Team" in the comment if the assigned team is
deleted.
- [x] 2. Delete the PR&team binding (the rows where `review_team_id =
${team_id}` in the `review` table) when deleting a team.
- [x] 3. For already existing, undeleted binding rows in the `review`
table, ~~we can delete these rows when executing migrations.~~ They
do not affect the function, so we won't delete them.
This allows usernames, and the emails connected to them, to be reserved
and not reused.
Use case, I manage an instance with open registration, and sometimes
when users are deleted for spam (or other purposes), their usernames are
freed up and they sign up again with the same information.
This could also be used to reserve usernames, and block them from being
registered (in case an instance would like to block certain things
without hardcoding the list in code and compiling from scratch).
This is an MVP, that will allow for future work where you can set
something as reserved via the interface.
---------
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
As promised in #23817, I have made this PR to make release download
URLs predictable. They follow the schema
`<repo>/releases/download/<tag>/<filename>`. This already works, but it
is nowhere shown in the UI or the API. The problem is that it is
currently possible to have multiple files with the same name (why do we
even allow this) for a release. I added some code to check whether a
release has two or more files with the same name: if yes, it uses the old
`attachments/<uuid>` URL; if no, it uses the new fancy URL.
I also changed `<repo>/releases/download/<tag>/<filename>` to
directly serve the file instead of redirecting, so people who use an
automatic update checker don't end up with the `attachments/<uuid>` URL.
Fixes #10919
---------
Co-authored-by: a1012112796 <1012112796@qq.com>
Without this patch, the setting SSH.StartBuiltinServer decides whether
the native (Go) implementation is used rather than calling 'ssh-keygen'.
It's possible for 'using ssh-keygen' and 'using the built-in server' to
be independent.
In fact, the gitea rootless container doesn't ship ssh-keygen and can be
configured to use the host's SSH server - which will cause the public
key parsing mechanism to break.
This commit changes the decision to be based on SSH.KeygenPath instead.
Any existing configurations with a custom KeygenPath set will continue
to function. The new default value of '' selects the native version. The
downside of this approach is that anyone who has been relying on plain
'ssh-keygen' having special properties will now be using the native
version instead.
I assume the exec-variant is only there because /x/crypto/ssh didn't
support ssh-ed25519 until 2016. I don't see any other reason for using
it so it might be an acceptable risk.
Fixes #23363
EDIT: this message was garbled when I tried to get the commit
description back in.. Trying to reconstruct it:
## ⚠️ BREAKING ⚠️ Users who don't have SSH.KeygenPath
explicitly set and rely on the ssh-keygen binary need to set
SSH.KeygenPath to 'ssh-keygen' in order to be able to continue using it
for public key parsing.
There was something else but I can't remember at the moment.
EDIT2: It was about `make test` and `make lint`. Can't get them to run.
To reproduce the issue, I installed `golang` in `docker.io/node:16` and
got:
```
...
go: mvdan.cc/xurls/v2@v2.4.0: unknown revision mvdan.cc/xurls/v2.4.0
go: gotest.tools/v3@v3.4.0: unknown revision gotest.tools/v3.4.0
...
go: gotest.tools/v3@v3.0.3: unknown revision gotest.tools/v3.0.3
...
go: error loading module requirements
```
Signed-off-by: Leon M. Busch-George <leon@georgemail.eu>
Right now the authors search dropdown might take a long time to load if
the number of authors is huge.
Example: in the video below, there are about 10,000 authors, and it
takes about 10 seconds to open the author dropdown.
https://user-images.githubusercontent.com/17645053/229422229-98aa9656-3439-4f8c-9f4e-83bd8e2a2557.mov
Possible improvements can be made, which will take 2 steps (Thanks to
@wolfogre for advice):
Step 1:
Backend: add a new API which returns at most 30 posters with a matching
prefix.
Frontend: change the search behavior from frontend search (Fomantic
search) to backend search (when the input changes, send a request to get
authors matching the current search prefix).
Step 2:
Backend: optimize the API from step 1 using the indexer to support fuzzy
search.
This PR implements the first step. The main changes:
1. Added API `GET /{type:issues|pulls}/posters`, which returns at most 30
users with a matching prefix (the prefix is sent as a query parameter). If
`DEFAULT_SHOW_FULL_NAME` in `custom/conf/app.ini` is set to true, it will
also include a fuzzy search over full names.
2. Added a tooltip saying "Shows a maximum of 30 users" to the author
search dropdown.
3. Changed the search behavior from frontend search to backend search.
After:
https://user-images.githubusercontent.com/17645053/229430960-f88fafd8-fd5d-4f84-9df2-2677539d5d08.mov
Fixes: https://github.com/go-gitea/gitea/issues/22586
---------
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
There is no fork concept in the agit flow; anyone with read permission can
push `refs/for/<target-branch>/<topic-branch>` to the repo. So we should
treat it as a fork pull request because it may be from an untrusted
user.
At first, we had one unified team unit permission, which is called
`Team.Authorize` in the DB.
But since https://github.com/go-gitea/gitea/pull/17811, we allow
different units to have different permissions.
The old code was only designed for the old version. So after #17811, if
org users have write permission for other units but no permission for
packages, they still get write permission for packages.
Co-authored-by: delvh <dev.lh@web.de>
The old code just parses an invalid key to `TypeInvalid` and uses it as
normal, and duplicate keys are kept.
This PR ignores invalid keys, logs a warning, and also deduplicates
valid units.
I neglected that the `NameKey` of `Unit` is not only for translation,
but also configuration. So it should be `repo.actions` to maintain
consistency.
## ⚠️ BREAKING ⚠️
If users already use `actions.actions` in `DISABLED_REPO_UNITS` or
`DEFAULT_REPO_UNITS`, it will be treated as an invalid unit key.
Adds API endpoints to manage issue/PR dependencies
* `GET /repos/{owner}/{repo}/issues/{index}/blocks` List issues that are
blocked by this issue
* `POST /repos/{owner}/{repo}/issues/{index}/blocks` Block the issue
given in the body by the issue in path
* `DELETE /repos/{owner}/{repo}/issues/{index}/blocks` Unblock the issue
given in the body by the issue in path
* `GET /repos/{owner}/{repo}/issues/{index}/dependencies` List an
issue's dependencies
* `POST /repos/{owner}/{repo}/issues/{index}/dependencies` Create a new
issue dependency (a hypothetical usage sketch follows the list)
* `DELETE /repos/{owner}/{repo}/issues/{index}/dependencies` Remove an
issue dependency
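As an illustration only, here is a rough sketch of calling the new dependencies endpoint from Go. The host, token and issue numbers are made up, and the request body shape (owner/repo/index of the dependency issue) is an assumption, not taken from the spec above; check the Swagger documentation for the real field names.
```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Assumed body shape identifying the dependency issue (hypothetical field names).
	body := []byte(`{"owner": "org", "repo": "project", "index": 5}`)
	url := "https://gitea.example.com/api/v1/repos/org/project/issues/3/dependencies"
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "token <access-token>")
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // issue 3 now depends on issue 5 if the call succeeded
}
```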
Closes https://github.com/go-gitea/gitea/issues/15393
Closes #22115
Co-authored-by: Andrew Thornton <art27@cantab.net>
Since #23493 has conflicts with latest commits, this PR is my proposal
for fixing #23371
Details are in the comments
It also refactors the `modules/options` module to make it always use
"filepath" to access local files.
Benefits:
* No need for calls like `util.CleanPath(strings.ReplaceAll(p, "\\", "/"))`
any more (and there was more than one such call before)
* The function behaviors are clearly defined
A part of https://github.com/go-gitea/gitea/pull/22865
At first, I thought we did not need 3 ProjectTypes, as we can check the
user type, but it seems that is not database friendly.
---------
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: 6543 <6543@obermui.de>
This PR fixes the tags sort issue mentioned in #23432.
The tags in the dropdown should be sorted in descending order of time but
are not, because when getting tags, it executes `git tag
--sort=-taggerdate`. Git supports two types of tags: lightweight and
annotated, and `--sort=-taggerdate` doesn't work with lightweight tags,
so it does not give the correct result. This PR adds
`GetTagNamesByRepoID` to get tags from the database so the tags are
sorted.
Also adapt this change to the dropdown list when comparing branches.
Dropdown places:
<img width="369" alt="截屏2023-03-15 14 25 39"
src="https://user-images.githubusercontent.com/17645053/225224506-65a72e50-4c11-41d7-8187-a7e9c7dab2cb.png">
<img width="675" alt="截屏2023-03-15 14 25 27"
src="https://user-images.githubusercontent.com/17645053/225224526-65ce8008-340c-43f6-aa65-b6bd9e1a1bf1.png">
`namedBlob` turned out to be a poor imitation of a `TreeEntry`. Using
the latter directly shortens this code.
This partially undoes https://github.com/go-gitea/gitea/pull/23152/,
which I found a merge conflict with, and also expands the test it added
to cover the subtle README-in-a-subfolder case.
This is a simple endpoint that adds the ability to rename users to the
admin API.
Note: this is not in a mergeable state. It would be better if this were
handled by a PATCH/POST to the /api/v1/admin/users/{username} endpoint
where the username is modified.
---------
Co-authored-by: Jason Song <i@wolfogre.com>
Fixes https://github.com/go-gitea/gitea/issues/22676
Context Data `IsOrganizationMember` and `IsOrganizationOwner` is used to
control the visibility of `people` and `team` tab.
2871ea0809/templates/org/menu.tmpl (L19-L40)
Because the user projects page is reused, the User context is changed to
an Organization context, but the values of `IsOrganizationMember` and
`IsOrganizationOwner` are not being set.
I reused the func `HandleOrgAssignment` to add them to the ctx, which may
introduce some unnecessary variables; I don't know whether that is OK.
I also found there is a missing `PageIsViewProjects` on the create project page.
Add test coverage to the important features of
[`routers.web.repo.renderReadmeFile`](067b0c2664/routers/web/repo/view.go (L273));
namely that:
- it can handle looking in docs/, .gitea/, and .github/
- it can handle choosing between multiple competing READMEs
- it prefers the localized README to the markdown README to the
plaintext README
- it can handle broken symlinks when processing all the options
- it uses the name of the symlink, not the name of the target of the
symlink
Replace #23350.
Refactor `setting.Database.UseMySQL` to
`setting.Database.Type.IsMySQL()`
to avoid mismatches between `Type` and `UseXXX`.
This refactor can fix the bug mentioned in #23350, so it should be
backported.
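A minimal sketch of the idea behind the refactor, assuming a string-based database type (method and constant names here are illustrative, not the exact Gitea code):
```go
package setting

// DatabaseType is a string-based type so the configured value can carry
// its own predicate methods instead of separate UseXXX booleans.
type DatabaseType string

func (t DatabaseType) IsMySQL() bool      { return t == "mysql" }
func (t DatabaseType) IsPostgreSQL() bool { return t == "postgres" }
func (t DatabaseType) IsMSSQL() bool      { return t == "mssql" }
func (t DatabaseType) IsSQLite3() bool    { return t == "sqlite3" }
```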
`renderReadmeFile` needs `readmeTreelink` as a parameter but gets
`treeLink`.
The values of them look like as following:
`treeLink`: `/{OwnerName}/{RepoName}/src/branch/{BranchName}`
`readmeTreelink`:
`/{OwnerName}/{RepoName}/src/branch/{BranchName}/{ReadmeFileName}`
`path.Dir` in
8540fc45b1/routers/web/repo/view.go (L316)
should convert `readmeTreelink` into
`/{OwnerName}/{RepoName}/src/branch/{BranchName}` instead of the current
`/{OwnerName}/{RepoName}/src/branch`.
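To illustrate the difference (paths below are examples), `path.Dir` only strips the last segment, so it yields the intended branch URL only when it is given the readme link:
```go
package main

import (
	"fmt"
	"path"
)

func main() {
	// With readmeTreelink, path.Dir strips the file name and keeps the branch URL.
	fmt.Println(path.Dir("/owner/repo/src/branch/main/README.md")) // /owner/repo/src/branch/main
	// With treeLink passed by mistake, path.Dir strips the branch name instead.
	fmt.Println(path.Dir("/owner/repo/src/branch/main")) // /owner/repo/src/branch
}
```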
Fixes #23151
---------
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
Extract from #11669 and enhancement to #22585 to support exclusive
scoped labels in label templates
* Move label template functionality to label module
* Fix handling of color codes
* Add Advanced label template
When a change is pushed to the default branch and many pull requests are
open for that branch, conflict checking can take some time.
Previously it would go from oldest to newest pull request. Now
prioritize pull requests that are likely being actively worked on or
prepared for merging.
This only changes the order within one push to one repository, but the
change is trivial and can already be quite helpful for smaller Gitea
instances where a few repositories have most pull requests. A global
order would require deeper changes to queues.
The name of the job or step comes from the workflow file, while the name
of the runner comes from its registration. If the strings used for these
names are too long, they could cause db issues.
GetActiveStopwatch & HasUserStopwatch are hot pieces of code that are
repeatedly called; on examination of the CPU profile for TestGit, they
represent 0.44 seconds of CPU time. This PR reduces this time to 80ms.
---------
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: delvh <leon@kske.dev>
This includes pull requests that you approved, requested changes or
commented on. Currently such pull requests are not visible in any of the
filters on /pulls, while they may need further action like merging, or
prodding the author or reviewers.
Especially when working with a large team on a repository it's helpful
to get a full overview of pull requests that may need your attention,
without having to sift through the complete list.
Unfortunately xorm's `builder.Select(...).From(...)` does not escape the
table names. This is mostly not a problem but is a problem with the
`user` table.
This PR simply escapes the user table. No other uses of `From("user")`
were found in the codebase, so I think this should be all that is
needed.
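A rough sketch of the kind of fix described, assuming xorm's `builder` package; the exact quoting Gitea uses may differ per database (MySQL-style backquotes shown), and the column names are illustrative:
```go
package main

import (
	"fmt"

	"xorm.io/builder"
)

func main() {
	// Quote the reserved "user" table name when it is used as a subquery source.
	cond := builder.In("owner_id",
		builder.Select("id").From("`user`").Where(builder.Eq{"is_active": true}),
	)
	sql, args, err := builder.ToSQL(cond)
	fmt.Println(sql, args, err)
}
```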
Fix #23064
Signed-off-by: Andrew Thornton <art27@cantab.net>
Partially fix #23050
After #22294 was merged, there was always a warning log like `cannot get
context cache` when starting up. This should not affect anything in real
use, but it's annoying. This PR fixes the problem: when starting up,
getting the system settings will not go through the cache but will read
from the database directly.
---------
Co-authored-by: Lauris BH <lauris@nix.lv>
I found `FindAndCount`, which can `Find` and `Count` at the same time.
Maybe it is better to use it in `FindProjects`.
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
minio/sha256-simd provides additional acceleration for SHA256 using
AVX512, SHA Extensions for x86 and ARM64 for ARM.
It provides a drop-in replacement for crypto/sha256 and if the
extensions are not available it falls back to standard crypto/sha256.
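A sketch of the drop-in usage, assuming the package keeps the `crypto/sha256` API (the input string is just an example):
```go
package main

import (
	"encoding/hex"
	"fmt"

	sha256 "github.com/minio/sha256-simd" // drop-in for "crypto/sha256"
)

func main() {
	// Same call shape as crypto/sha256; SIMD acceleration is used when available.
	sum := sha256.Sum256([]byte("hello gitea"))
	fmt.Println(hex.EncodeToString(sum[:]))
}
```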
---------
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
Ensure that issue pullrequests are loaded before trying to set the
self-reference.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: delvh <leon@kske.dev>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
The API to create tokens is missing the ability to set the required
scopes for tokens, and to show them on the API and on the UI.
This PR adds this functionality.
Signed-off-by: Andrew Thornton <art27@cantab.net>
During the recent hash algorithm change it became clear that the choice
of password hash algorithm plays a role in the time taken for CI to run.
Therefore, as an attempt to improve CI, we should consider using a dummy
hashing algorithm instead of a real hashing algorithm.
This PR creates a dummy algorithm which is then set as the default
hashing algorithm during tests that use the fixtures. This hopefully
will cause a reduction in the time it takes for CI to run.
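A minimal sketch of what such a test-only algorithm could look like; the interface and names are assumptions, not the actual Gitea implementation, and it must never be used outside tests:
```go
package hash

import "encoding/hex"

// DummyHasher is a deliberately insecure, cheap "hash" for tests only.
type DummyHasher struct{}

// Hash just hex-encodes salt and password, so hashing costs almost nothing in CI.
func (DummyHasher) Hash(password, salt string) (string, error) {
	return hex.EncodeToString([]byte(salt + ":" + password)), nil
}

// VerifyPassword re-computes the dummy hash and compares it.
func (d DummyHasher) VerifyPassword(password, salt, hash string) bool {
	computed, _ := d.Hash(password, salt)
	return computed == hash
}
```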
---------
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Some bugs are caused by a lack of unit tests in fundamental packages. This PR
refactors the `setting` package so that creating a unit test will be easier
than before.
- All `LoadFromXXX` functions have been split into two: `InitProviderFromXXX`
and `LoadCommonSettings`. The first only includes the code to create or open
an ini file; the second loads the common settings.
- It also renames all functions in setting from `newXXXService` to
`loadXXXSetting` or `loadXXXFrom` to make the function names less
confusing.
- Move `XORMLog` to `SQLLog` because it's a better name for it.
Maybe we should finally move these `loadXXXSetting` calls into the `XXXInit`
functions? Any ideas?
---------
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: delvh <dev.lh@web.de>
This PR refactors and improves the password hashing code within Gitea
and makes it possible for server administrators to set the password
hashing parameters.
In addition, it takes the opportunity to adjust the settings for `pbkdf2`
in order to make the hashing a little stronger.
The majority of this work was inspired by PR #14751 and I would like to
thank @boppy for their work on this.
Thanks to @gusted for the suggestion to adjust the `pbkdf2` hashing
parameters.
Close #14751
---------
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Add a new "exclusive" option per label. This makes it so that when the
label is named `scope/name`, no other label with the same `scope/`
prefix can be set on an issue.
The scope is determined by the last occurrence of `/`, so for example
`scope/alpha/name` and `scope/beta/name` are considered to be in
different scopes and can coexist.
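A minimal sketch of that scope rule, assuming a small helper (the function name is illustrative, not necessarily what Gitea uses):
```go
package label

import "strings"

// scopeOf returns the exclusive scope of a label name, e.g.
// "scope/alpha" for "scope/alpha/name", or "" for an unscoped label.
func scopeOf(name string) string {
	i := strings.LastIndex(name, "/")
	if i < 0 {
		return ""
	}
	return name[:i]
}
```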
Exclusive scopes are not enforced by any database rules, however they
are enforced when editing labels at the models level, automatically
removing any existing labels in the same scope when either attaching a
new label or replacing all labels.
In menus use a circle instead of checkbox to indicate they function as
radio buttons per scope. Issue filtering by label ensures that only a
single scoped label is selected at a time. Clicking with alt key can be
used to remove a scoped label, both when editing individual issues and
batch editing.
Label rendering refactor for consistency and code simplification:
* Labels now consistently have the same shape, emojis and tooltips
everywhere. This includes the label list and label assignment menus.
* In label list, show description below label same as label menus.
* Don't use exactly black/white text colors to look a bit nicer.
* Simplify text color computation. There is no point computing luminance
in linear color space, as this is a perceptual problem and sRGB is
closer to perceptually linear.
* Increase height of label assignment menus to show more labels. Showing
only 3-4 labels at a time leads to a lot of scrolling.
* Render all labels with a new RenderLabel template helper function.
Label creation and editing in multiline modal menu:
* Change label creation to open a modal menu like label editing.
* Change menu layout to place name, description and colors on separate
lines.
* Don't color cancel button red in label editing modal menu.
* Align text to the left in the modal menu for better readability and
consistency with the settings layout elsewhere.
Custom exclusive scoped label rendering:
* Display scoped label prefix and suffix with slightly darker and
lighter background color respectively, and a slanted edge between them
similar to the `/` symbol.
* In menus exclusive labels are grouped with a divider line.
---------
Co-authored-by: Yarden Shoham <hrsi88@gmail.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
Unfortunately #20896 does not completely prevent "Data too long" issues,
and GPGKeyImport needs to be increased too.
Fix #22896
Signed-off-by: Andrew Thornton <art27@cantab.net>
Allow back-dating user creation via the `adminCreateUser` API operation.
`CreateUserOption` now has an optional field `created_at`, which can
contain a datetime-formatted string. If this field is present, the
user's `created_unix` database field will be updated to its value.
This is important for Blender's migration of users from Phabricator to
Gitea. There are many users, and the creation timestamp of their account
can give us some indication as to how long someone's been part of the
community.
The back-dating is done in a separate query that just updates the user's
`created_unix` field. This was the easiest and cleanest way I could
find, as in the initial `INSERT` query the field always is set to "now".
Fix #22797.
## Reason
If a comment was migrated from another platform, this comment may have an
original author, and its poster is never the original author. When the
`roleDescriptor` func gets the poster's role descriptor for a comment, it
does not check whether the comment has an original author, so the migrated
comments' original authors might be marked with incorrect roles.
---------
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
To avoid duplicated loads of the same data in an HTTP request, we can set
a context cache to do that, i.e. some pages may load a user with the same
ID from the database in different areas of the same page, but the code is
hidden in two different deep code paths. How should we share the user?
As a result of this PR, if both entry functions accept `context.Context`
as the first parameter, we just need to refactor `GetUserByID` to reuse
the user from the context cache. Then it will not be loaded twice in one
HTTP request.
But of course, sometimes we would like to reload an object from the
database; that's why `RemoveContextData` is also exposed.
The core context cache is here. It defines a new context
```go
type cacheContext struct {
	ctx  context.Context
	data map[any]map[any]any
	lock sync.RWMutex
}

var cacheContextKey = struct{}{}

func WithCacheContext(ctx context.Context) context.Context {
	return context.WithValue(ctx, cacheContextKey, &cacheContext{
		ctx:  ctx,
		data: make(map[any]map[any]any),
	})
}
```
Then you can use the below 4 methods to read/write/del the data within
the same context.
```go
func GetContextData(ctx context.Context, tp, key any) any
func SetContextData(ctx context.Context, tp, key, value any)
func RemoveContextData(ctx context.Context, tp, key any)
func GetWithContextCache[T any](ctx context.Context, cacheGroupKey string, cacheTargetID any, f func() (T, error)) (T, error)
```
Then let's take a look at how `system.GetString` implements it.
```go
func GetSetting(ctx context.Context, key string) (string, error) {
	return cache.GetWithContextCache(ctx, contextCacheKey, key, func() (string, error) {
		return cache.GetString(genSettingCacheKey(key), func() (string, error) {
			res, err := GetSettingNoCache(ctx, key)
			if err != nil {
				return "", err
			}
			return res.SettingValue, nil
		})
	})
}
```
First, it checks whether the context data includes the setting object with
the key. If not, it queries the global cache, which may be a memory cache
or a Redis cache. If that also misses, it gets the object from the
database. In the end, if the object comes from the global cache or the
database, it is also set into the context cache.
An object stored in the context cache will only be destroyed after the
context disappears.
As part of administration sometimes it is appropriate to forcibly tell
users to update their passwords.
This PR creates a new command `gitea admin user must-change-password`
which will set the `MustChangePassword` flag on the provided users.
Signed-off-by: Andrew Thornton <art27@cantab.net>
As discussed in #22847, the helpers in helpers.less need to have a
separate prefix as they are causing conflicts with Fomantic styles.
This will allow us to have the `.gt-hidden { display:none !important; }`
style that is needed for the reverted PR.
Of note in doing this I have noticed that there was already a conflict
with at least one chroma style which this PR now avoids.
I've also added in the `gt-hidden` style that matches the tailwind one
and switched the code that needed it to use that.
Signed-off-by: Andrew Thornton <art27@cantab.net>
---------
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Add setting to allow edits by maintainers by default, to avoid having to
often ask contributors to enable this.
This also reorganizes the pull request settings UI to improve clarity.
It was unclear which checkbox options were there to control available
merge styles and which merge styles they correspond to.
Now there is a "Merge Styles" label followed by the merge style options
with the same name as in other menus. The remaining checkboxes were
moved to the bottom, ordered rougly by typical order of operations.
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
When we updated the .golangci.yml for 1.20, we should have used a string,
as 1.20 is not a valid number.
In doing so, we need to restore the nolint markings within the pq driver.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Original Issue: https://github.com/go-gitea/gitea/issues/22102
This addition would be a big benefit for design and art teams using the
issue tracking.
The preview will be the latest "image type" attachments on an issue:
simple, and it allows for automatic updates of the cover image as issue
progress is made!
This would make Gitea competitive with Trello... wouldn't it be amazing
to say goodbye to Atlassian products? Ha.
The first image is the most recent; the SQL will fetch up to the 5 latest
images (URL strings).
All images supported by browsers plus upcoming formats: *.avif *.bmp
*.gif *.jpg *.jpeg *.jxl *.png *.svg *.webp
The CSS will try to center-align images until it cannot, then it will
left align with overflow hidden. Single images get to be slightly
larger!
Tested so far on: Chrome, Firefox, Android Chrome, Android Firefox.
Current revision with light and dark themes:
![image](https://user-images.githubusercontent.com/24665/207066878-58e6bf73-0c93-4caa-8d40-38f4432b3578.png)
![image](https://user-images.githubusercontent.com/24665/207066555-293f65c3-e706-4888-8516-de8ec632d638.png)
---------
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: delvh <dev.lh@web.de>
In Go code, HTMLURL should be only used for external systems, like
API/webhook/mail/notification, etc.
If a URL is used by `Redirect` or rendered in a template, it should be a
relative URL (aka `Link()` in Gitea)
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
I haven't tested `runs_list.tmpl` but I think it could be right.
After this PR, besides the `<meta .. HTMLURL>` in html head, the only
explicit HTMLURL usage is in `pull_merge_instruction.tmpl`, which
doesn't affect users too much and it's difficult to fix at the moment.
There are still many usages of `AppUrl` in the templates (eg: the
package help manual), they are similar problems as the HTMLURL in
pull_merge_instruction, and they might be fixed together in the future.
Diff without space:
https://github.com/go-gitea/gitea/pull/22831/files?diff=unified&w=1
Fixes #19555
Test-Instructions:
https://github.com/go-gitea/gitea/pull/21441#issuecomment-1419438000
This PR implements the mapping of user groups provided by OIDC providers
to org teams in Gitea. The main part is a refactoring of the existing
LDAP code to make it usable from different providers.
Refactorings:
- Moved the router auth code from module to service because of import
cycles
- Changed some model methods to take a `Context` parameter
- Moved the mapping code from LDAP to a common location
I've tested it with Keycloak but other providers should work too. The
JSON mapping format is the same as for LDAP.
![grafik](https://user-images.githubusercontent.com/1666336/195634392-3fc540fc-b229-4649-99ac-91ae8e19df2d.png)
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Partially fix #19345.
This PR adds some `Link` methods for different objects. The `Link`
methods are no different from `HTMLURL` except that they lack the
absolute URL prefix. Most UI uses of `HTMLURL` have been replaced with
`Link` so that users can visit them from a different domain or IP.
This PR also introduces a new JavaScript configuration value,
`window.config.reqAppUrl`, which differs from `appUrl`: it is still an
absolute URL, but its domain has been replaced with the currently
requested domain.
We should call `PushToBaseRepo` before
`notification.NotifyPullRequestSynchronized`,
or the notifier will get an old commit when reading the branch
`pull/xxx/head`.
Found by ~#21937~ #22679.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
This PR fixes two problems. One is that when filtering repository issues,
only repository-level projects are listed. The other is that if you list
open issues, only open projects are displayed in the filter options, and
if you list closed issues, only closed projects are displayed in the
filter options.
With this PR, both repository-level and org/user-level projects are
displayed in the filter, and both open and closed projects are listed as
filter items.
---------
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: delvh <dev.lh@web.de>
Every user can already disable the filter manually, so the explicit
setting is absolutely useless and only complicates the logic.
Previously, there was also unexpected behavior when multiple query
parameters were present.
---------
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Most of the time forks are used for contributing code only, so not having
issues, projects, releases and packages is a better default for such
cases.
They can still be enabled in the settings.
A new option `DEFAULT_FORK_REPO_UNITS` is added to configure the default
units on forks.
Also add missing `repo.packages` unit to documentation.
code by: @brechtvl
## ⚠️ BREAKING ⚠️
When forking a repository, the fork will now have issues, projects,
releases, packages and wiki disabled. These can be enabled in the
repository settings afterwards. To change back to the previous default
behavior, configure `DEFAULT_FORK_REPO_UNITS` to be the same value as
`DEFAULT_REPO_UNITS`.
Co-authored-by: Brecht Van Lommel <brecht@blender.org>
Our trace logging is far from perfect and is difficult to follow.
This PR:
* Add trace logging for process manager add and remove.
* Fixes an errant file read for git refs in getMergeCommit
* Brings in the pullrequest `String` and `ColorFormat` methods
introduced in #22568
* Adds a lot more logging to testPR etc.
Ref #22578
---------
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Fix #21994.
And fix #19470.
While generating a new repo from a template, it does something like
"commit to the git repo, re-fetch the repo model from the DB, and update
the default branch if it's empty".
19d5b2f922/modules/repository/generate.go (L241-L253)
Unfortunately, when loading the repo from the DB, the default branch will
be set to `setting.Repository.DefaultBranch` if it's empty:
19d5b2f922/models/repo/repo.go (L228-L233)
I believe it's a very old temporary patch but has been kept for many
years, see:
[2d2d85bb](https://github.com/go-gitea/gitea/commit/2d2d85bb#diff-1851799b06733db4df3ec74385c1e8850ee5aedee70b8b55366910d22725eea8)
I know it's a risk to delete it, and it may lead to potential behavioral
changes, but we cannot keep the outdated `FIXME` forever. On the other
hand, an empty `DefaultBranch` does make sense: an empty repo doesn't
have one conceptually (actually, Gitea will still set it to
`setting.Repository.DefaultBranch` to make it safer).
The error reported when a user passes a private ssh key as their ssh
public key is not very nice.
This PR improves this slightly.
Ref #22693
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: delvh <dev.lh@web.de>
- Currently the function `GetUsersWhoCanCreateOrgRepo` uses a query that
can return duplicated users; this can happen under the condition that a
user is in a team that either is the owner team or has permission to
create organization repositories.
- Add test code to simulate the above condition for user 3,
[`TestGetUsersWhoCanCreateOrgRepo`](a1fcb1cfb8/models/organization/org_test.go (L435))
is the test function that tests for this.
- The fix is quite trivial: use a map keyed by user ID in order to drop
duplicates.
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Currently only a single project can be assigned (like a milestone), not
multiple (like labels).
Implements #14298
Code by @brechtvl
---------
Co-authored-by: Brecht Van Lommel <brecht@blender.org>
Instead of re-creating, these should use the available `Link` methods
from the "parent" of the project, which also take sub-urls into account.
Signed-off-by: jolheiser <john.olheiser@gmail.com>
This commit adds support for specifying comment types when importing
with `gitea restore-repo`. It makes it possible to import issue changes,
such as "title changed" or "assigned user changed".
An earlier version of this pull request was made by Matti Ranta, in
https://future.projects.blender.org/blender-migration/gitea-bf/pulls/3
There are two changes with regard to Matti's original code:
1. The comment type was an `int64` in Matti's code, and is now using a
string. This makes it possible to use `comment_type: title`, which is
more reliable and future-proof than an index into an internal list in
the Gitea Go code.
2. Matti's code also had support for including labels, but in a way that
would require knowing the database ID of the labels before the import
even starts, which is impossible. This can be solved by using label
names instead of IDs; for simplicity I left that out of this PR.
When importing a repository via `gitea restore-repo`, external users
will get remapped to an admin user. This admin user is obtained via
`users.GetAdminUser()`, which unfortunately picks a more-or-less random
admin to return.
This makes it hard to predict which admin user will get assigned. This
patch orders the admin by ascending ID before choosing the first one,
i.e. it picks the admin with the lowest ID.
Even though it would be nicer to have full control over which user is
chosen, this at least gives us a predictable result.
This PR adds the support for scopes of access tokens, mimicking the
design of GitHub OAuth scopes.
The core logic changes are in `models/auth`: the `AccessToken` struct now
has a `Scope` field. The normalized (deduplicated), comma-separated scope
string is stored in the `access_token` table in the database.
In `services/auth`, the scope will be stored in context, which will be
used by `reqToken` middleware in API calls. Only OAuth2 tokens will have
granular token scopes, while others like BasicAuth will default to scope
`all`.
A large amount of work happens in `routers/api/v1/api.go` and the
corresponding `tests/integration` tests, that is adding necessary scopes
to each of the API calls as they fit.
- [x] Add `Scope` field to `AccessToken`
- [x] Add access control to all API endpoints
- [x] Update frontend & backend for when creating tokens
- [x] Add a database migration for `scope` column (enable 'all' access
to past tokens)
I'm aiming to complete it before Gitea 1.19 release.
Fixes #4300
This PR adds a task to the cron service to allow garbage collection of
LFS meta objects. As repositories may have a large number of
LFSMetaObjects, an updated column is added to this table and it is used
to perform a generational GC to attempt to reduce the amount of work.
(There may need to be a bit more work here but this is probably enough
for the moment.)
Fix #7045
Signed-off-by: Andrew Thornton <art27@cantab.net>
There is a mistake in the code for SearchRepositoryCondition where it
tests topics as a string. This is incorrect for postgres where topics is
cast and stored as JSON; topics needs to be cast to text for this to
work. (For some reason JSON_ARRAY_LENGTH does not work, so I have taken
the simplest solution of casting to text and doing a string comparison.)
Ref https://github.com/go-gitea/gitea/pull/21962#issuecomment-1379584057
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: delvh <dev.lh@web.de>
This PR introduces glob matching for protected branch names. The separator
is `/`; you can use `*` to match non-separator characters and `**` to
match across separators.
It also supports entering an existing or non-existing branch name as a
matching condition, and a branch-name condition has higher priority than a
glob rule.
Should fix #2529 and #15705
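A minimal sketch of those matching semantics, assuming the gobwas/glob library with `/` as the separator (patterns and branch names below are examples):
```go
package main

import (
	"fmt"

	"github.com/gobwas/glob"
)

func main() {
	// "**" crosses "/" separators, so every branch under release/ matches.
	g := glob.MustCompile("release/**", '/')
	fmt.Println(g.Match("release/v1.19"))        // true
	fmt.Println(g.Match("release/v1.19/hotfix")) // true

	// "*" stays within a single path segment.
	g2 := glob.MustCompile("feature/*", '/')
	fmt.Println(g2.Match("feature/login"))     // true
	fmt.Println(g2.Match("feature/login/wip")) // false
}
```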
screenshots
<img width="1160" alt="image"
src="https://user-images.githubusercontent.com/81045/205651179-ebb5492a-4ade-4bb4-a13c-965e8c927063.png">
Co-authored-by: zeripath <art27@cantab.net>
closes #13585
fixes #9067
fixes #2386
ref #6226
ref #6219
fixes #745
This PR adds support to process incoming emails to perform actions.
Currently I added handling of replies and unsubscribing from
issues/pulls. In contrast to #13585 the IMAP IDLE command is used
instead of polling which results (in my opinion 😉) in cleaner code.
Procedure:
- When sending an issue/pull reply email, a token is generated which is
present in the Reply-To and References header.
- IMAP IDLE waits until a new email arrives
- The token tells which action should be performed
A possible signature and/or reply gets stripped from the content.
I added a new service to the drone pipeline to test the receiving of
incoming mails. If we keep this in, we may test our outgoing emails too
in future.
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Fix #22386
`GetDirectorySize` was renamed to `getDirectorySize` because it has become
a special function which should not be put in `util`.
Co-authored-by: Jason Song <i@wolfogre.com>
- Merge the files `compare.go` and `slice.go` into `slice.go`.
- Fix `ExistsInSlice`, it's buggy.
- It uses `sort.Search`, so it assumes that the input slice is sorted.
- It passes `func(i int) bool { return slice[i] == target }` to
`sort.Search`, which is incorrect; check the doc of `sort.Search`.
- Combine `IsInt64InSlice(int64, []int64)` and `ExistsInSlice(string,
[]string)` into `SliceContains[T]([]T, T)` (a minimal sketch follows this list).
- Combine `IsSliceInt64Eq([]int64, []int64)` and `IsEqualSlice([]string,
[]string)` into `SliceSortedEqual[T]([]T, []T)`.
- Add `SliceEqual[T]([]T, []T)` as a distinction from
`SliceSortedEqual[T]([]T, []T)`.
- Redesign `RemoveIDFromList([]int64, int64) ([]int64, bool)` to
`SliceRemoveAll[T]([]T, T) []T`.
- Add `SliceContainsFunc[T]([]T, func(T) bool)` and
`SliceRemoveAllFunc[T]([]T, func(T) bool)` for general use.
- Add comments to explain why not `golang.org/x/exp/slices`.
- Add unit tests.
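A minimal sketch of what two of the generic helpers described above could look like (assumed shapes, not necessarily the exact Gitea implementations):
```go
package util

// SliceContains reports whether target appears in slice; unlike the old
// ExistsInSlice it does not assume the slice is sorted.
func SliceContains[T comparable](slice []T, target T) bool {
	for _, v := range slice {
		if v == target {
			return true
		}
	}
	return false
}

// SliceRemoveAll returns a copy of slice with every occurrence of target removed.
func SliceRemoveAll[T comparable](slice []T, target T) []T {
	result := make([]T, 0, len(slice))
	for _, v := range slice {
		if v != target {
			result = append(result, v)
		}
	}
	return result
}
```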
Related to #22362.
I overlooked that there's always `committer.Close()`, like:
```go
ctx, committer, err := db.TxContext(db.DefaultContext)
if err != nil {
	return nil
}
defer committer.Close()
// ...
if err != nil {
	return nil
}
// ...
return committer.Commit()
```
So the `Close` of `halfCommitter` should ignore the "commit and close"
case; it's not a rollback.
See: [Why `halfCommitter` and `WithTx` should rollback IMMEDIATELY or
commit
LATER](https://github.com/go-gitea/gitea/pull/22366#issuecomment-1374778612).
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
After #22362, we can feel free to use transactions without
`db.DefaultContext`.
And there are still lots of models using `db.DefaultContext`, I think we
should refactor them carefully and one by one.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Unfortunately, #22295 introduced a bug: when setting a cached system
setting, the change does not take effect.
This PR makes sure to remove the cache key when updating a system
setting.
Fix #22332
Fix #22281
In #21621, `Get[V]` and `Set[V]` were introduced, so that the cached
value would be a `*Setting`. For the memory cache that's OK, but the Redis
cache can only store a `string` in the current implementation. This PR
reverts some of those changes and just stores or returns a `string` for
system settings.
Previously, there was an `import services/webhooks` inside
`modules/notification/webhook`.
This import was removed (after fighting against many import cycles).
Additionally, `modules/notification/webhook` was moved to
`modules/webhook`,
and a few structs/constants were extracted from `models/webhooks` to
`modules/webhook`.
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
#18058 made a mistake. The disableGravatar's default value depends on
`OfflineMode`. If it's `true`, then `disableGravatar` is true, otherwise
it's `false`, not the opposite.
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
If user has reached the maximum limit of repositories:
- Before
- disallow create
- allow fork without limit
- This patch:
- disallow create
- disallow fork
- Add option `ALLOW_FORK_WITHOUT_MAXIMUM_LIMIT` (default **true**):
enabling this allows users to fork repositories without the maximum number limit
Fixes https://github.com/go-gitea/gitea/issues/21847
Signed-off-by: Xinyu Zhou <i@sourcehut.net>
Some dbs require that all tables have primary keys, see
- #16802
- #21086
We can add a test to keep it from being broken again.
Edit:
~Added missing primary key for `ForeignReference`~ Dropped the
`ForeignReference` table to satisfy the check, so it closes #21086.
More context can be found in comments.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: zeripath <art27@cantab.net>
The recent PR adding orphaned checks to the LFS storage is not
sufficient to completely GC LFS, as it is possible for LFSMetaObjects to
remain associated with repos but still need to be garbage collected.
Imagine a situation where a branch is uploaded containing LFS files but
that branch is later completely deleted. The LFSMetaObjects will remain
associated with the Repository but the Repository will no longer contain
any pointers to the object.
This PR adds a second doctor command to perform a full GC.
Signed-off-by: Andrew Thornton <art27@cantab.net>
depends on #22094
Fixes https://codeberg.org/forgejo/forgejo/issues/77
The old logic did not consider `is_internal`.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Close #14601
Fix #3690
Revive of #14601.
Updated to current code, cleanup and added more read/write checks.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Signed-off-by: Andre Bruch <ab@andrebruch.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Norwin <git@nroo.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Fix #22023
I've changed how the percentages for the language statistics are rounded
because they did not always add up to 100%.
Now it's done with the largest remainder method, which makes sure the
total is exactly 100%.
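A minimal sketch of the largest remainder method (illustrative only, not the exact Gitea code): round each share down to one decimal place, then hand the leftover tenths to the entries with the largest remainders so the total is exactly 100.0.
```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// largestRemainder turns raw sizes into percentages with one decimal place
// that always sum to exactly 100.0.
func largestRemainder(sizes []float64) []float64 {
	var total float64
	for _, s := range sizes {
		total += s
	}
	// Work in tenths of a percent (1000 units in total).
	floors := make([]int, len(sizes))
	remainders := make([]float64, len(sizes))
	used := 0
	for i, s := range sizes {
		exact := s / total * 1000
		floors[i] = int(math.Floor(exact))
		remainders[i] = exact - float64(floors[i])
		used += floors[i]
	}
	// Give the missing tenths to the entries with the largest remainders.
	order := make([]int, len(sizes))
	for i := range order {
		order[i] = i
	}
	sort.Slice(order, func(a, b int) bool { return remainders[order[a]] > remainders[order[b]] })
	for i := 0; i < 1000-used; i++ {
		floors[order[i%len(order)]]++
	}
	result := make([]float64, len(sizes))
	for i, f := range floors {
		result[i] = float64(f) / 10
	}
	return result
}

func main() {
	// Sums to exactly 100.0: one entry gets the extra 0.1.
	fmt.Println(largestRemainder([]float64{1, 1, 1}))
}
```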
Co-authored-by: Lauris BH <lauris@nix.lv>
When deleting a closed issue, we should update both `NumIssues` and
`NumClosedIssues`, or `NumOpenIssues` (`= NumIssues - NumClosedIssues`)
will be wrong. It's the same for pull requests.
Related to #21557.
Also fixed two harmless problems:
- The SQL to check issue/PR total numbers is wrong, which means it will
update the numbers even if they are correct.
- Replace legacy `num_issues = num_issues + 1` operations with
`UpdateRepoIssueNumbers`.
When getting tracked times out of the DB and loading their attributes,
handle not-exist errors in a nicer way. (Also prevent an NPE.)
Fix #22006
Signed-off-by: Andrew Thornton <art27@cantab.net>
`hex.EncodeToString` has better performance than `fmt.Sprintf("%x",
[]byte)`, we should use it as much as possible.
I'm not an extreme fan of performance, so I think there are some
exceptions:
- `fmt.Sprintf("%x", func(...)[N]byte())`
- We can't slice the function return value directly, and it's not worth
adding lines.
```diff
func A()[20]byte { ... }
- a := fmt.Sprintf("%x", A())
- a := hex.EncodeToString(A()[:]) // invalid
+ tmp := A()
+ a := hex.EncodeToString(tmp[:])
```
- `fmt.Sprintf("%X", []byte)`
- `strings.ToUpper(hex.EncodeToString(bytes))` has even worse
performance.
Change all license headers to comply with REUSE specification.
Fix #16132
Co-authored-by: flynnnnnnnnnn <flynnnnnnnnnn@github>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
Committer avatars rendered by `func AvatarByEmail` are not vertically
aligned as those rendered by `func Avatar` are.
- Replace literals `ui avatar` and `ui avatar vm` with the constant
`DefaultAvatarClass`
When re-retrieving hook tasks from the DB, double-check that they have not
been delivered in the meantime. Further, ensure that tasks are marked as
delivered when they are being delivered.
In addition:
* Improve the error reporting and make sure that the webhook task
population script runs in a separate goroutine.
* Only get hook task IDs out of the DB instead of the whole task when
repopulating the queue
* When repopulating the queue make the DB request paged
Ref #17940
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Unfortunately #21549 changed the name of Testcases without changing
their associated fixture directories.
Fix #21854
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
This PR adds a context parameter to a bunch of methods. Some helper
`xxxCtx()` methods got replaced with the normal name now.
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
- It's possible that the `user_redirect` table contains a user id that
no longer exists.
- Delete a user redirect upon deleting the user.
- Add a check for these dangling user redirects to check-db-consistency.
The doctor check `storages` currently only checks the attachment
storage. This PR adds some basic garbage collection functionality for
the other types of storage.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Fix #19513
This PR introduces a new db method `InTransaction(context.Context)`,
and also a built-in check on `db.TxContext` and `db.WithTx`.
A new method `db.AutoTx` has also been introduced, but it is left to be used by other PRs.
`WithTx` will always open a new transaction; if a transaction already exists in the context, it returns an error.
`AutoTx` will only open a new transaction if no transaction exists in the context.
That means it will always run inside a transaction if there is no error.
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: 6543 <6543@obermui.de>
Related #20471
This PR adds global quota limits for the package registry. Settings for
individual users/orgs can be added in a separate PR using the settings
table.
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Close https://github.com/go-gitea/gitea/issues/21640
Before: Gitea can create users like ".xxx" or "x..y", which is not
ideal; it's already a consensus that dot filenames have special
meanings, and `a..b` is a confusing name when doing a cross-repo compare.
After: stricter.
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: delvh <dev.lh@web.de>
_This is a different approach to #20267; I took the liberty of adapting
some parts, see below_
## Context
In some cases, a webhook endpoint requires some kind of authentication.
The usual way is by sending a static `Authorization` header, with a
given token. For instance:
- Matrix expects a `Bearer <token>` (already implemented by storing the
header in cleartext in the metadata - which is buggy on retry #19872)
- TeamCity #18667
- Gitea instances #20267
- SourceHut https://man.sr.ht/graphql.md#authentication-strategies (this
is my actual personal need :)
## Proposed solution
Add a dedicated encrypted column to the webhook table (instead of storing
it as meta as proposed in #20267), so that it is available for all
present and future hook types (especially the custom ones #19307).
This would also solve the buggy matrix retry #19872.
As a first step, I would recommend focusing on the backend logic and
improving the frontend at a later stage. For now the UI is a simple
`Authorization` field (which could later be customized with `Bearer` and
`Basic` switches):
![2022-08-23-142911](https://user-images.githubusercontent.com/3864879/186162483-5b721504-eef5-4932-812e-eb96a68494cc.png)
The header name is hard-coded, since I couldn't find any use case
justifying otherwise.
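For illustration, a rough sketch (hypothetical names, not Gitea's actual delivery code) of how a stored, decrypted `Authorization` value could be attached to an outgoing delivery request:
```go
package webhook

import (
	"bytes"
	"context"
	"net/http"
)

// deliverWithAuth sends the payload and, if the hook has an authorization
// value configured, sets it as the Authorization header. authorizationFor is
// assumed to read and decrypt the value stored in the new webhook column.
func deliverWithAuth(ctx context.Context, client *http.Client, url string, payload []byte, authorizationFor func() (string, error)) (*http.Response, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	if auth, err := authorizationFor(); err != nil {
		return nil, err
	} else if auth != "" {
		req.Header.Set("Authorization", auth) // e.g. "Bearer <token>" or "Basic <credentials>"
	}
	return client.Do(req)
}
```
If `authorizationFor` returns an empty string, no header is set, so hooks without authentication keep working unchanged.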
## Questions
- What do you think of this approach? @justusbunsi @Gusted @silverwind
- ~~How are the migrations generated? Do I have to manually create a new
file, or is there a command for that?~~
- ~~I started adding it to the API: should I complete it or should I
drop it? (I don't know how much the API is actually used)~~
## Done as well:
- add a migration for the existing matrix webhooks and remove the
`Authorization` logic there
_Closes #19872_
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: delvh <dev.lh@web.de>
I found myself wondering whether a PR I scheduled for automerge was
actually merged. It was, but I didn't receive a mail notification for it
- that makes sense considering I am the doer and usually don't want to
receive such notifications. But ideally I want to receive a notification
when a PR was merged because I scheduled it for automerge.
This PR implements exactly that.
The implementation works, but I wonder if there's a way to avoid passing
the "This PR was automerged" state down so much. I tried solving this
via the database (checking whether an automerge is scheduled for the PR
when sending the notification), but that did not work reliably, probably
because the notification is sent asynchronously and the entry might
already have been deleted by then. My implementation might be the most
straightforward, but maybe not the most elegant.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
The OAuth spec [defines two types of
client](https://datatracker.ietf.org/doc/html/rfc6749#section-2.1),
confidential and public. Previously Gitea assumed all clients to be
confidential.
> OAuth defines two client types, based on their ability to authenticate
> securely with the authorization server (i.e., ability to
> maintain the confidentiality of their client credentials):
>
> confidential
> Clients capable of maintaining the confidentiality of their
> credentials (e.g., client implemented on a secure server with
> restricted access to the client credentials), or capable of secure
> client authentication using other means.
>
> **public
> Clients incapable of maintaining the confidentiality of their
> credentials (e.g., clients executing on the device used by the resource
> owner, such as an installed native application or a web browser-based
> application), and incapable of secure client authentication via any
> other means.**
>
> The client type designation is based on the authorization server's
> definition of secure authentication and its acceptable exposure levels
> of client credentials. The authorization server SHOULD NOT make
> assumptions about the client type.
https://datatracker.ietf.org/doc/html/rfc8252#section-8.4
> Authorization servers MUST record the client type in the client
registration details in order to identify and process requests
accordingly.
Require PKCE for public clients:
https://datatracker.ietf.org/doc/html/rfc8252#section-8.1
> Authorization servers SHOULD reject authorization requests from native
> apps that don't use PKCE by returning an error message
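For context, a minimal sketch of what a public client does under PKCE (RFC 7636): generate a random code verifier and derive the S256 code challenge that accompanies the authorization request. The function name is illustrative:
```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// pkcePair returns a random code_verifier and the matching S256 code_challenge.
func pkcePair() (verifier, challenge string, err error) {
	buf := make([]byte, 32)
	if _, err = rand.Read(buf); err != nil {
		return "", "", err
	}
	verifier = base64.RawURLEncoding.EncodeToString(buf) // sent with the token request
	sum := sha256.Sum256([]byte(verifier))
	challenge = base64.RawURLEncoding.EncodeToString(sum[:]) // sent with the authorization request (method S256)
	return verifier, challenge, nil
}

func main() {
	v, c, err := pkcePair()
	if err != nil {
		panic(err)
	}
	fmt.Println("code_verifier:", v)
	fmt.Println("code_challenge:", c)
}
```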
Fixes #21299
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
When actions besides "delete" are performed on issues, the milestone
counter is updated. However, since deleting issues goes through a
different code path, the associated milestone's count wasn't being
updated, resulting in inaccurate counts until another issue in the same
milestone had a non-delete action performed on it.
I verified this change fixes the inaccurate counts using a local docker
build.
Fixes #21254
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
At the moment a repository reference is needed for webhooks. With the
upcoming package PR we need to send webhooks without a repository
reference. For example a package is uploaded to an organization. In
theory this enables the usage of webhooks for future user actions.
This PR removes the repository ID from `HookTask` and changes how the
hooks are processed (see `services/webhook/deliver.go`). In a follow-up
PR I want to remove the usage of the `UniqueQueue` and replace it with a
normal queue, because there is no reason to be unique.
Co-authored-by: 6543 <6543@obermui.de>
A lot of our code repeatedly tests whether individual errors are
specific types of Not Exist errors. This is repetitive and unnecessary.
`Unwrap() error` provides a common way of labelling an error as a
NotExist error and we can/should use this.
This PR has chosen to use the common `io/fs` errors, e.g.
`fs.ErrNotExist`, for our errors. This is in some ways not completely
correct, as these are not filesystem errors, but it seems like a
reasonable thing to do and would allow us to simplify a lot of our code
to `errors.Is(err, fs.ErrNotExist)` instead of
`package.IsErr...NotExist(err)`.
I am open to suggestions to use a different base error - perhaps
`models/db.ErrNotExist` if that would be felt to be better.
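A small sketch of the proposed pattern, with an illustrative error type (not an actual Gitea type): `Unwrap()` labels the error as a NotExist error, so callers can use `errors.Is` instead of a package-specific helper:
```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
)

// ErrPackageNotExist is a hypothetical example of a domain error.
type ErrPackageNotExist struct {
	Name string
}

func (e ErrPackageNotExist) Error() string {
	return fmt.Sprintf("package does not exist: %s", e.Name)
}

// Unwrap labels this error as a NotExist error.
func (e ErrPackageNotExist) Unwrap() error {
	return fs.ErrNotExist
}

func main() {
	err := ErrPackageNotExist{Name: "demo"}
	fmt.Println(errors.Is(err, fs.ErrNotExist)) // true
}
```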
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: delvh <dev.lh@web.de>
depends on #18871
Added some API integration tests to help with testing #18798.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Related:
* #21362
This PR uses a general and stable method to generate resource indexes
(e.g. Issue Index, PR Index).
If the code looks good, I can add more tests.
PS: please skip the diff and only look at the new code; it's entirely
rewritten.
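As a rough illustration of one general and stable approach (illustrative table/column names, not necessarily the schema used here): keep a per-group counter row, insert it on first use, and increment it atomically, with a bounded retry when two writers race to create the row:
```go
package index

import (
	"context"
	"database/sql"
	"errors"
)

// nextIndex returns the next per-group index (e.g. the next issue number for a
// repository). In real code the UPDATE and the following read would run in one
// transaction (or use RETURNING where the database supports it).
func nextIndex(ctx context.Context, db *sql.DB, groupID int64) (int64, error) {
	for attempt := 0; attempt < 3; attempt++ {
		res, err := db.ExecContext(ctx,
			"UPDATE resource_index SET max_index = max_index + 1 WHERE group_id = ?", groupID)
		if err != nil {
			return 0, err
		}
		if n, _ := res.RowsAffected(); n == 1 {
			var idx int64
			err := db.QueryRowContext(ctx,
				"SELECT max_index FROM resource_index WHERE group_id = ?", groupID).Scan(&idx)
			return idx, err
		}
		// No counter row yet: try to create it. If another writer wins the race,
		// the unique key makes this insert fail and we retry the increment.
		if _, err := db.ExecContext(ctx,
			"INSERT INTO resource_index (group_id, max_index) VALUES (?, 1)", groupID); err == nil {
			return 1, nil
		}
	}
	return 0, errors.New("failed to generate resource index after retries")
}
```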
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
For security reasons, all e-mail addresses starting with
non-alphanumeric characters were rejected. This is too broad and rejects
perfectly valid e-mail addresses. Only leading hyphens should be
rejected; in all other cases validation should simply follow RFC 5322.
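A hedged sketch of the narrower rule (not the exact check used by Gitea): reject only a leading hyphen and otherwise defer to RFC 5322 parsing via `net/mail`:
```go
package validation

import (
	"errors"
	"net/mail"
	"strings"
)

// validateEmail rejects only addresses starting with a hyphen; everything else
// is left to the RFC 5322 parser in the standard library.
func validateEmail(addr string) error {
	if strings.HasPrefix(addr, "-") {
		return errors.New("e-mail address must not start with a hyphen")
	}
	if _, err := mail.ParseAddress(addr); err != nil {
		return err
	}
	return nil
}
```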
Co-authored-by: Andreas Fischer <_@ndreas.de>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
- Currently `repository.Num{Issues,Pulls}` weren't checked and could
become inconsistent. This adds these two checks to `CheckRepoStats`.
- Fix the incorrect SQL query for `repository.NumClosedPulls`; the check
should be for `repo_num_pulls`.
- Reference: https://codeberg.org/Codeberg/Community/issues/696
Adds the settings pages for creating OAuth2 apps to the org settings as
well, allowing apps to be created for orgs.
Refactoring: the oauth2 related templates are shared for
instance-wide/org/user, and the backend code uses `OAuth2CommonHandlers`
to share code for instance-wide/org/user.
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Fixes #21250
Related #20414
Conan packages don't have to follow SemVer.
The migration fixes the setting for all existing Conan and Generic
(#20414) packages.
Adds GitHub-like pages to view watched repos and subscribed issues/PRs
This is my second try at fixing this, and it is better than the first
(#17053) since it doesn't use a filter option, which could be slow when
accessing `/issues` or `/pulls`, and it shows both pulls and issues.
Closes #16111
Replaces and closes #17053
![Screenshot](https://user-images.githubusercontent.com/80460567/134782937-3112f7da-425a-45b6-9511-5c9695aee896.png)
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
This adds an API endpoint `/files` to PRs that allows getting the list of changed files.
Built upon #18228; the reviews there are included.
closes https://github.com/go-gitea/gitea/issues/654
Co-authored-by: Anton Bracke <anton@ju60.de>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Fixes #21206
If the user and the viewer are equal, the method should return true.
Also, the common-organization check was wrong, as `count` can never be
less than 0.
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
There is a mistake in the batched comment-deletion part of `DeleteUser` which causes some comments to not be deleted.
The code incorrectly advances the `start` offset of the limit clause; since already-deleted comments no longer show up in later queries, this skips over rows and leaves most comments undeleted.
```go
if err = e.Where("type=? AND poster_id=?", issues_model.CommentTypeComment, u.ID).Limit(batchSize, start).Find(&comments); err != nil {
```
should be:
```go
if err = e.Where("type=? AND poster_id=?", issues_model.CommentTypeComment, u.ID).Limit(batchSize, 0).Find(&comments); err != nil {
```
Co-authored-by: zeripath <art27@cantab.net>
Add support for triggering webhook notifications on wiki changes.
This PR contains frontend and backend for webhook notifications on wiki actions (create a new page, rename a page, edit a page and delete a page). The frontend got a new checkbox under the Custom Event -> Repository Events section. There is only one checkbox for the create/edit/rename/delete actions, because it makes no sense to separate them; other events like releases or packages follow the same scheme.
![image](https://user-images.githubusercontent.com/121972/177018803-26851196-831f-4fde-9a4c-9e639b0e0d6b.png)
The actions themselves are separated, so that different notifications are sent (distinguished by the "action" field). All the webhook receivers implement the new interface method (Wiki) and the corresponding tests.
While implementing this, I encountered a little bug when editing a wiki page. Creating and editing a wiki page is technically the same action and is handled by the `updateWikiPage` function. But the function needs to know whether it is a new wiki page or just a change. This distinction is made via the `action` parameter, which was not sent by the frontend (on form submit). This PR fixes that by adding the `action` parameter with the values `_new` or `_edit`, which are used by the `updateWikiPage` function.
I've done integration tests with Matrix and Gitea (HTTP).
![image](https://user-images.githubusercontent.com/121972/177018795-eb5cdc01-9ba3-483e-a6b7-ed0e313a71fb.png)
Fix #16457
Signed-off-by: Aaron Fischer <mail@aaron-fischer.net>
A testing cleanup.
This pull request replaces `os.MkdirTemp` with `t.TempDir`. We can use the `T.TempDir` function from the `testing` package to create a temporary directory. The directory created by `T.TempDir` is automatically removed when the test and all its subtests complete.
This saves us at least two lines (the error check and the cleanup) per instance, or in some cases adds cleanup that we forgot.
Reference: https://pkg.go.dev/testing#T.TempDir
```go
func TestFoo(t *testing.T) {
	// before: manual creation and cleanup
	tmpDir, err := os.MkdirTemp("", "")
	require.NoError(t, err)
	defer os.RemoveAll(tmpDir)

	// now: removed automatically when the test and its subtests complete
	tmpDir := t.TempDir()
}
```
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
When migrating, add several more important sanity checks (sketched below):
* SHAs must be SHAs
* Refs must be valid Refs
* URLs must be reasonable
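The checks could look roughly like the following; these helpers are loose approximations for illustration, not the migration code itself:
```go
package main

import (
	"fmt"
	"net/url"
	"regexp"
	"strings"
)

var shaPattern = regexp.MustCompile(`^[0-9a-fA-F]{40}$`) // 40 hex chars (SHA-1)

func isValidSHA(s string) bool { return shaPattern.MatchString(s) }

// isReasonableRef is a very loose approximation of git-check-ref-format.
func isReasonableRef(ref string) bool {
	return ref != "" && !strings.Contains(ref, "..") &&
		!strings.ContainsAny(ref, " \t\n~^:?*[\\")
}

// isReasonableURL accepts only parseable http(s)/git URLs with a host.
func isReasonableURL(raw string) bool {
	u, err := url.Parse(raw)
	return err == nil && u.Host != "" &&
		(u.Scheme == "http" || u.Scheme == "https" || u.Scheme == "git")
}

func main() {
	fmt.Println(isValidSHA("0123456789abcdef0123456789abcdef01234567")) // true
	fmt.Println(isReasonableRef("refs/heads/main"))                     // true
	fmt.Println(isReasonableURL("https://example.com/owner/repo.git"))  // true
}
```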
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <matti@mdranta.net>
In #21031 we discovered that on very big tables PostgreSQL will prefer a
plan involving the sort column over the more restrictive index.
Therefore we add another index for PostgreSQL and update the original migration.
Fix #21031
Signed-off-by: Andrew Thornton <art27@cantab.net>
* fix hard-coded timeout and error panic in API archive download endpoint
This commit updates the `GET /api/v1/repos/{owner}/{repo}/archive/{archive}`
endpoint which prior to this PR had a couple of issues.
1. The endpoint had a hard-coded 20s timeout for the archiver to complete, after
   which a 500 (Internal Server Error) was returned to the client. For a scripted
   API client there was no clear way of telling that the operation timed out and
   that it should retry.
2. Whenever the timeout _did occur_, the code used to panic. This was caused by
the API endpoint "delegating" to the same call path as the web, which uses a
slightly different way of reporting errors (HTML rather than JSON for
example).
More specifically, `api/v1/repo/file.go#GetArchive` just called through to
`web/repo/repo.go#Download`, which expects the `Context` to have a `Render`
field set, but which is `nil` for API calls. Hence, a `nil` pointer error.
The code addresses (1) by dropping the hard-coded timeout. Instead, any
timeout/cancelation on the incoming `Context` is used.
The code addresses (2) by updating the API endpoint to use a separate call path
for the API-triggered archive download. This avoids producing HTML-errors on
errors (it now produces JSON errors).
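A simplified sketch of point (1), with hypothetical types and names: instead of a fixed 20-second timer, the wait respects whatever deadline or cancellation the incoming request `Context` carries:
```go
package repo

import (
	"context"
	"fmt"
)

// waitForArchive blocks until the archiver signals completion on done, or the
// incoming request context is cancelled or reaches its deadline, whichever
// happens first. The channel-based signalling is illustrative only.
func waitForArchive(ctx context.Context, done <-chan error) error {
	select {
	case err := <-done:
		return err // archiving finished (possibly with an error)
	case <-ctx.Done():
		return fmt.Errorf("archive request aborted: %w", ctx.Err())
	}
}
```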
Signed-off-by: Peter Gardfjäll <peter.gardfjall.work@gmail.com>
Adds a new option to only show relevant repos on the explore page. For bigger Gitea instances like Codeberg this is a nice option to enable, so the explore page is populated with unique and "high quality" repos. A note is shown that the results are filtered, with the possibility to see the unfiltered results.
Co-authored-by: vednoc <vednoc@protonmail.com>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: 6543 <6543@obermui.de>
Unfortunately some keys are too big to fit within the 65535-byte limit of TEXT on MySQL,
which causes issues with these large keys.
Therefore increase these fields to MEDIUMTEXT.
Fix #20894
Signed-off-by: Andrew Thornton <art27@cantab.net>
- Currently the function takes the `UserID` option, but it isn't used
in the SQL query. This patch fixes that by ensuring only teams the user
belongs to are returned.
Fix #20829
Co-authored-by: delvh <dev.lh@web.de>
Whilst looking at #20840 I noticed that the Mirrors data doesn't appear
to be used, so we can remove it; in fact none of the related code is
used elsewhere, so it can also be removed.
Related #20840
Related #20804
Signed-off-by: Andrew Thornton <art27@cantab.net>
In `MirrorRepositoryList.loadAttributes` there is some code to load the Mirror entries
from the database. This assumes that every Repository which has IsMirror set has
a Mirror associated in the DB. This assumption is incorrect for a mirror
repository that is still being created, as there is no Mirror entry in the DB
until creation completes.
Unfortunately LoadAttributes makes this incorrect assumption and presumes that a
Mirror will always be loaded. This then causes a panic.
This PR simply double-checks that there is a Mirror before attempting to link back to
its Repo. Unfortunately, it should be expected that there may be other cases where
this incorrect assumption causes further problems.
Fix #20804
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* merge `CheckLFSVersion` into `InitFull` (renamed from `InitWithSyncOnce`)
* remove the `Once` during git init, no data-race now
* for doctor sub-commands, `InitFull` should only be called in initialization stage
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* Added support for Pub packages.
* Update docs/content/doc/packages/overview.en-us.md
Co-authored-by: Gergely Nagy <algernon@users.noreply.github.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Gergely Nagy <algernon@users.noreply.github.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
- Add a new push mirror to a specific repository
- Sync now (send all the changes to the configured push mirrors)
- Get list of all push mirrors of a repository
- Get a push mirror by ID
- Delete push mirror by ID
Signed-off-by: Mohamed Sekour <mohamed.sekour@exfo.com>
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: zeripath <art27@cantab.net>
The WebAuthn specification has been updated to set the maximum size of the
CredentialID to 1023 bytes. This is somewhat larger than our current
size, so we need to migrate.
The PR changes the struct to add a CredentialIDBytes field and migrates the
CredentialID string to the bytes field before another migration drops the old
CredentialID field. A further migration then renames this field back.
Fix #20457
Signed-off-by: Andrew Thornton <art27@cantab.net>
Sometimes users want to receive email notifications of messages they create or reply to.
This adds an option to the personal preferences that allows users to choose.
Closes #20149
I was looking into the visibility checks because I need them for something else and noticed they are more complicated than they have to be.
The rule is just (see the sketch below): a user/org is visible if
- The doer is a member of the org, regardless of the org visibility
- The doer is not restricted and the user/org is public or limited
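A condensed sketch of that rule (illustrative types, not the actual models code):
```go
package main

import "fmt"

type Visibility int

const (
	VisibilityPublic Visibility = iota
	VisibilityLimited
	VisibilityPrivate
)

// Viewer holds just what the rule needs; the real models carry much more.
type Viewer struct {
	IsRestricted bool
	Memberships  map[string]bool // org name -> viewer is a member
}

// isVisible applies the two rules above to a target user/org.
func isVisible(target string, visibility Visibility, viewer Viewer) bool {
	// Rule 1: members see their org regardless of its visibility.
	if viewer.Memberships[target] {
		return true
	}
	// Rule 2: a non-restricted viewer sees public and limited users/orgs.
	return !viewer.IsRestricted && (visibility == VisibilityPublic || visibility == VisibilityLimited)
}

func main() {
	viewer := Viewer{Memberships: map[string]bool{"secret-org": true}}
	fmt.Println(isVisible("secret-org", VisibilityPrivate, viewer)) // true (member)
	fmt.Println(isVisible("other-org", VisibilityLimited, viewer))  // true (not restricted)
}
```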
When viewing a subdirectory, the commit status icon shown next to the
latest commit for that directory in the file table incorrectly reflected
the HEAD commit instead of the latest commit for that directory.
* Fixes issue #19603 (not able to merge a PR when the branches' content is the same but the commit IDs differ)
* fill HeadCommitID in PullRequest
* compare real commit IDs when checking whether the PR can be merged
* based on @zeripath's patch in #19738