Targeting #14936, #15332
Adds a collaborator permissions API endpoint, following the GitHub API (https://docs.github.com/en/rest/collaborators/collaborators#get-repository-permissions-for-a-user), to retrieve a collaborator's permissions for a specific repository.
### Checks the repository permissions of a collaborator.
`GET` `/repos/{owner}/{repo}/collaborators/{collaborator}/permission`
Possible `permission` values are `admin`, `write`, `read`, `owner`, `none`.
```json
{
"permission": "admin",
"role_name": "admin",
"user": {}
}
```
Here `permission` and `role_name` hold the same value and `user` is filled with the user API object. Only admins are allowed to use this API endpoint.
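For illustration, a minimal client-side sketch of calling the new endpoint with Go's standard library; the base URL, token, and repository/user names are placeholders, and only the fields shown above are decoded.

```go
// Minimal client sketch (not part of the PR); host, token and names are placeholders.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type permissionResponse struct {
	Permission string          `json:"permission"`
	RoleName   string          `json:"role_name"`
	User       json.RawMessage `json:"user"`
}

func main() {
	req, err := http.NewRequest("GET",
		"https://gitea.example.com/api/v1/repos/owner/repo/collaborators/someuser/permission", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "token <admin-token>") // the endpoint is admin-only

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var p permissionResponse
	if err := json.NewDecoder(resp.Body).Decode(&p); err != nil {
		panic(err)
	}
	fmt.Println(p.Permission, p.RoleName)
}
```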
Adds a feature [like GitHub has](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork) (step 7).
If you create a new PR from a forked repo, you can select (and change later, but only if you are the PR creator/poster) the "Allow edits from maintainers" option.
Then users with write access to the base branch get more permissions on this branch:
* use the update pull request button
* push directly from the command line (`git push`)
* edit/delete/upload files via web UI
* use related API endpoints
You can't merge PRs into this branch with only these extra permissions; merging still requires "full" code write permission.
This feature has a pretty big impact on the permission system. I might have forgotten to change some things or missed security vulnerabilities. In that case, please leave a review or comment on this PR.
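As a rough orientation, here is a hedged sketch of the kind of decision this feature implies; the type and function names (`PullRequest`, `canWriteToHeadBranch`, ...) are illustrative and not the actual Gitea implementation.

```go
// Hypothetical sketch of the permission decision; names are illustrative only.
package main

import "fmt"

type PullRequest struct {
	AllowMaintainerEdit bool // the "Allow edits from maintainers" checkbox
	IsFromFork          bool
}

// canWriteToHeadBranch reports whether a user may push to the PR's head branch:
// either they can write to the head repository anyway, or the PR comes from a
// fork, the poster enabled maintainer edits, and the user can write to the base repo.
func canWriteToHeadBranch(pr PullRequest, canWriteHeadRepo, canWriteBaseRepo bool) bool {
	if canWriteHeadRepo {
		return true
	}
	return pr.IsFromFork && pr.AllowMaintainerEdit && canWriteBaseRepo
}

func main() {
	pr := PullRequest{AllowMaintainerEdit: true, IsFromFork: true}
	fmt.Println(canWriteToHeadBranch(pr, false, true)) // true: maintainer may edit
}
```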
Closes #17728
Co-authored-by: 6543 <6543@obermui.de>
There is a rare race whereby the c.running channel could
be closed twice. Looking at the code I do not see a need for the c.running
channel and therefore I think we can remove it. (I think c.running
might have been an attempt to prevent a hang, but the use of os.Pipes should
prevent that.)
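For illustration, a small self-contained example of the hazard: closing a channel twice panics, so if the channel were kept it would need a guard such as `sync.Once`; removing it (as this PR does) sidesteps the problem entirely. The `cmd`/`markStopped` names are made up for the sketch.

```go
// Illustration only: two concurrent close attempts on the same channel.
package main

import (
	"fmt"
	"sync"
)

type cmd struct {
	running   chan struct{}
	closeOnce sync.Once
}

func (c *cmd) markStopped() {
	// Without the sync.Once, two concurrent callers could both reach
	// close(c.running) and the second close would panic.
	c.closeOnce.Do(func() { close(c.running) })
}

func main() {
	c := &cmd{running: make(chan struct{})}
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.markStopped()
		}()
	}
	wg.Wait()
	fmt.Println("no panic: channel closed exactly once")
}
```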
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: 6543 <6543@obermui.de>
- Doing 64-bit atomic operations on 32-bit machines is a bit tricky in
Go, as they can only be done under a certain set of
conditions (https://pkg.go.dev/sync/atomic#pkg-note-BUG).
- This PR fixes such a case where those conditions weren't met: it moves
the int64 to the first field of the struct, which guarantees the alignment
required for 64-bit atomic operations on this field on 32-bit machines (see the sketch after this list).
- Resolves #19518
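A minimal sketch of the alignment rule being relied on; the struct and field names are illustrative, not the actual ones touched by the PR.

```go
// On 32-bit platforms only the first word of an allocated struct is guaranteed
// to be 64-bit aligned, so the atomically-accessed int64 must come first.
package main

import (
	"fmt"
	"sync/atomic"
)

type counter struct {
	numInQueue int64 // must be the first field for safe atomic access on 32-bit machines
	name       string
}

func main() {
	c := &counter{name: "example"}
	atomic.AddInt64(&c.numInQueue, 1)
	fmt.Println(atomic.LoadInt64(&c.numInQueue))
}
```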
Within doArchive there is a service goroutine that performs the
archiving function. This goroutine reports its error using a `chan
error` called `done`. Prior to this PR this channel had 0 capacity
meaning that the goroutine would block until the `done` channel was
cleared - however there are a couple of ways in which this channel might
not be read.
The simplest solution is to add a single slot of capacity to the
`done` channel, which means the goroutine will always complete, and
even if the `done` channel is never read it will simply be garbage
collected away.
(The PR also contains two other places when setting up the indexers
which do not leak but where the blocking of the sending goroutine is
also unnecessary and so we should just add a small amount of capacity
and let the sending goroutine complete as soon as it can.)
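A small sketch of the pattern, with made-up names: giving the error channel a capacity of one lets the worker goroutine finish even when the caller stops listening.

```go
// Illustration of the buffered-channel pattern; names are invented for the sketch.
package main

import (
	"errors"
	"fmt"
	"time"
)

func doArchiveLikeWork() error {
	done := make(chan error, 1) // capacity 1: the sender never blocks

	go func() {
		time.Sleep(50 * time.Millisecond) // pretend to archive something
		done <- errors.New("archive failed")
	}()

	select {
	case err := <-done:
		return err
	case <-time.After(10 * time.Millisecond):
		// The caller gives up. With an unbuffered channel the goroutine above
		// would block forever on its send; with capacity 1 it completes and
		// the channel is simply garbage collected.
		return errors.New("timed out waiting for archiver")
	}
}

func main() {
	fmt.Println(doArchiveLikeWork())
}
```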
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
If an `os/exec.Command` is passed something other than an `*os.File` as an input/output, Go
will create `os.Pipe`s internally and wait for them to be closed in `cmd.Wait()`. If
the code following this is responsible for closing `io.Pipe`s or other
handles, then on process death from context cancellation the `Wait` can
hang.
There are two possible solutions:
1. use `os.Pipe` as the input/output as `cmd.Wait` does not wait for these.
2. create a goroutine waiting on the context cancellation that will close the inputs.
This PR provides the second option - which is a simpler change that can
be more easily backported.
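A hedged sketch of option 2 (not the exact Gitea change): a goroutine watches the context and closes the pipe feeding the command, so `cmd.Wait` cannot hang on the copy into the internal `os.Pipe`.

```go
// Sketch of closing the command's input when the context is cancelled.
package main

import (
	"context"
	"fmt"
	"io"
	"os/exec"
	"time"
)

func run(ctx context.Context) error {
	pr, pw := io.Pipe()

	cmd := exec.CommandContext(ctx, "cat") // reads stdin until EOF
	cmd.Stdin = pr                         // non-*os.File: exec copies it into an internal os.Pipe

	if err := cmd.Start(); err != nil {
		return err
	}

	// When the context is cancelled, close the writer so the internal copy
	// finishes and cmd.Wait can return instead of hanging.
	go func() {
		<-ctx.Done()
		_ = pw.CloseWithError(ctx.Err())
	}()

	return cmd.Wait()
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	fmt.Println(run(ctx))
}
```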
Closes #19448
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Set correct PR status on 3-way conflict checking
- When 3-way merge is enabled for conflict checking, it has an
interesting new behavior: it doesn't return an error when it finds a
conflict. So we change the condition to not check for the error and
instead check whether the list of conflicted files is populated; this fixes an issue
whereby the PR status wasn't set correctly on conflicted PRs (see the sketch after this list).
- Refactor the mergeable property (which was set incorrectly and led me to this
bug) to be more maintainable.
- Add a dedicated test for conflict checking, which should prevent
future issues with this.
* Fix linter
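A hedged sketch of the changed condition; the `checkResult`/`mergeable` names are hypothetical and only illustrate checking the conflicted-files list instead of the error.

```go
// Illustrative only, not the actual Gitea code.
package main

import "fmt"

type checkResult struct {
	ConflictedFiles []string
	Err             error
}

func mergeable(r checkResult) (bool, error) {
	if r.Err != nil {
		return false, r.Err // a real error, not a conflict
	}
	// Previously "no error" was treated as "no conflict"; with 3-way merge
	// enabled we must look at the conflicted files instead.
	return len(r.ConflictedFiles) == 0, nil
}

func main() {
	fmt.Println(mergeable(checkResult{ConflictedFiles: []string{"README.md"}}))
}
```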
When a mirror repo's interval is updated via the UI, the mirror is rescheduled with that interval;
however, the API does not do this. The API also lacks the enable_prune option.
This PR adds this functionality to the API Edit Repo endpoint.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Do a refactoring of the CSRF-related code and remove most unnecessary functions.
Parse the generated token's issue time and regenerate the token every few minutes.
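A very rough sketch of the regeneration idea, assuming a token that simply embeds its issue time; the format, window, and helper names are all invented for illustration and are not the actual CSRF implementation.

```go
// Invented sketch: decide whether a token is old enough to regenerate.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

const regenerateAfter = 10 * time.Minute // "every few minutes"

func issueToken(now time.Time) string {
	return "random-part:" + strconv.FormatInt(now.Unix(), 10)
}

func needsRegeneration(token string, now time.Time) bool {
	parts := strings.Split(token, ":")
	if len(parts) != 2 {
		return true
	}
	issued, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		return true
	}
	return now.Sub(time.Unix(issued, 0)) > regenerateAfter
}

func main() {
	tok := issueToken(time.Now().Add(-15 * time.Minute))
	fmt.Println(needsRegeneration(tok, time.Now())) // true: older than the window
}
```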
Reusing `/api/v1` from the Gitea UI pages has pros and cons.
Pros:
1) Less code copy
Cons:
1) `api/v1` has to support sharing sessions with page requests.
2) Changes to either `api/v1` or the pages need to take the other into account.
This PR removes all dependencies on `api/v1` from the UI pages.
Partially replace #16052
Remove two unmaintained vendor packages `i18n` and `paginater`. Changes:
* Rewrite the `i18n` package with a clearer fallback mechanism (sketched after this list). Fix an unstable `Tr` behavior, add more tests.
* Refactor the legacy `Paginater` to `Paginator`, test cases are kept unchanged.
Trivial enhancement (no breaking for end users):
* Use the first locale in the LANGS setting option as the default, and add a log message to avoid surprising users.
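A hedged sketch of what a clear fallback for `Tr` can look like (illustrative only, not the actual package): requested locale first, then the default locale, then the key itself so missing translations are visible but harmless.

```go
// Illustrative fallback lookup; names and structure are not the real i18n package.
package main

import "fmt"

type localeStore struct {
	defaultLang string
	messages    map[string]map[string]string // lang -> key -> text
}

func (s *localeStore) Tr(lang, key string) string {
	if msg, ok := s.messages[lang][key]; ok {
		return msg
	}
	if msg, ok := s.messages[s.defaultLang][key]; ok {
		return msg // fall back to the default locale
	}
	return key // last resort: return the key itself
}

func main() {
	s := &localeStore{
		defaultLang: "en-US",
		messages: map[string]map[string]string{
			"en-US": {"home": "Home"},
			"de-DE": {},
		},
	}
	fmt.Println(s.Tr("de-DE", "home")) // falls back to "Home"
}
```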
There appears to be an intermittent NPE in queue tests relating to the deferred
shutdown/terminate functions.
This PR more formally asserts that shutdown and termination occur before starting
and finishing the tests, but leaves the defer in place to ensure that if there is an
issue, shutdown/termination will still occur.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Follows: #19284
* The `CopyDir` is only used inside test code
* Rewrite `ToSnakeCase` with more test cases (a minimal sketch follows this list)
* The `RedisCacher` only puts strings into the cache; here we use an internal `toStr` to replace the legacy `ToStr`
* The `UniqueQueue` can use string as ID directly, no need to call `ToStr`
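A minimal illustrative `ToSnakeCase`, not the exact util implementation, and deliberately ignoring edge cases such as consecutive upper-case letters.

```go
// Simplified snake_case conversion for illustration only.
package main

import (
	"fmt"
	"strings"
	"unicode"
)

func toSnakeCase(s string) string {
	var b strings.Builder
	for i, r := range s {
		if unicode.IsUpper(r) {
			if i > 0 {
				b.WriteByte('_')
			}
			b.WriteRune(unicode.ToLower(r))
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(toSnakeCase("RepoName"))   // repo_name
	fmt.Println(toSnakeCase("issueIndex")) // issue_index
}
```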
Right now, a pull-mirror repo does not get marked as such until *after* the
mirroring completes. In the meantime, it will show up (in API and UI) as a
regular repo.
The main purpose is to refactor the legacy `unknwon/com` package.
1. Remove most imports of `unknwon/com`, only `util/legacy.go` imports the legacy `unknwon/com`
2. Use golangci's depguard to process denied packages
3. Fix some incorrect values in golangci.yml, e.g. the version should be a quoted string `"1.18"`
4. Use correctly escaped content for `go-import` and `go-source` meta tags
5. Refactor `com.Expand` to our stable (and equally fast) `vars.Expand`; our `vars.Expand` can still return partially rendered content even if the template is not good (e.g. key mismatch).
Follows #19266 and #8553, closes #18553. Now there are only three `Run..(&RunOpts{})` functions.
* before: `stdout, err := RunInDir(path)`
* now: `stdout, _, err := RunStdString(&git.RunOpts{Dir:path})`
- Upgrade all JS dependencies minus vue and vue-loader
- Adapt to breaking change of octicons
- Update eslint rules
- Tested Swagger UI, sortablejs and prod build
Continues on from #19202.
Following the addition of pprof labels we can now more easily understand the relationship between a goroutine and the requests that spawn them.
This PR takes advantage of the labels and adds a few others, then provides a mechanism for the monitoring page to query the pprof goroutine profile.
The binary profile that results is immediately piped into the Google pprof library for parsing, and stack traces are then formed for the goroutines.
If the goroutine is within a context or has been created from a goroutine within a process context it will acquire the process description labels for that process.
The goroutines are mapped to their associated PIDs, and any that do not have an associated PID are placed in a group at the bottom as unbound.
In this way we should be able to more easily examine goroutines that have been stuck.
A manager command `gitea manager processes` is also provided that can export the processes (with or without stacktraces) to the command line.
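A small sketch of the mechanism being used, with an invented label name: `runtime/pprof` labels set around a piece of work are inherited by goroutines spawned inside it, which is what allows grouping goroutines by the process that created them.

```go
// Demonstration of pprof label inheritance; the label name is invented, not Gitea's.
package main

import (
	"context"
	"fmt"
	"runtime/pprof"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	labels := pprof.Labels("process_description", "GET /owner/repo/pulls") // hypothetical label

	pprof.Do(context.Background(), labels, func(ctx context.Context) {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Goroutines spawned here inherit the labels, so a goroutine
			// profile can associate them with the request that spawned them.
			pprof.ForLabels(ctx, func(key, value string) bool {
				fmt.Println(key, "=", value)
				return true
			})
		}()
	})
	wg.Wait()
}
```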
Signed-off-by: Andrew Thornton <art27@cantab.net>
This addresses https://github.com/go-gitea/gitea/issues/18352
It aims to improve performance (and resource use) of the `SyncReleasesWithTags` operation for pull-mirrors.
For large repositories with many tags, `SyncReleasesWithTags` can be a costly operation (taking several minutes to complete). The reason is two-fold:
1. on sync, every upstream repo tag is compared (for changes) against existing local entries in the release table to ensure that they are up-to-date.
2. the procedure for getting _each tag_ involves a series of git operations
```bash
git show-ref --tags -- v8.2.4477
git cat-file -t 29ab6ce9f36660cffaad3c8789e71162e5db5d2f
git cat-file -p 29ab6ce9f36660cffaad3c8789e71162e5db5d2f
git rev-list --count 29ab6ce9f36660cffaad3c8789e71162e5db5d2f
```
of which the `git rev-list --count` can be particularly heavy.
This PR optimizes performance for pull-mirrors. We utilize the fact that a pull-mirror is always identical to its upstream: we rebuild the entire release table on every sync and use a single batch `git for-each-ref .. refs/tags` call to retrieve all tags in one go.
For large mirror repos, with hundreds of annotated tags, this brings down the duration of the sync operation from several minutes to a few seconds. A few unscientific examples run on my local machine:
- https://github.com/spring-projects/spring-boot (223 tags)
  - before: `0m28,673s`
  - after: `0m2,244s`
- https://github.com/kubernetes/kubernetes (890 tags)
  - before: `8m00s`
  - after: `0m8,520s`
- https://github.com/vim/vim (13954 tags)
  - before: `14m20,383s`
  - after: `0m35,467s`
I added a `foreachref` package which contains a flexible way of specifying which reference fields are of interest (`git-for-each-ref(1)`) and of producing a parser for the expected output. These could be reused in other places where `for-each-ref` is used. I'll add unit tests for those if the overall PR looks promising.
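A rough sketch of the batch approach, not the actual `foreachref` API: one `git for-each-ref` call with an explicit NUL-delimited format, parsed line by line.

```go
// One process for all tags instead of several git calls per tag; field choice is illustrative.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("git", "for-each-ref",
		"--format=%(refname:short)%00%(objectname)%00%(creatordate:iso8601)", "refs/tags")
	out, err := cmd.Output()
	if err != nil {
		panic(err)
	}

	scanner := bufio.NewScanner(strings.NewReader(string(out)))
	for scanner.Scan() {
		fields := strings.Split(scanner.Text(), "\x00")
		if len(fields) != 3 {
			continue
		}
		fmt.Printf("tag=%s object=%s created=%s\n", fields[0], fields[1], fields[2])
	}
}
```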
This follows
* https://github.com/go-gitea/gitea/issues/18553
Introduce `RunWithContextString` and `RunWithContextBytes` to help the refactoring, and add related unit tests. They keep the same behavior as the previous `RunInXxx` functions: stderr is saved into `err.Error()`.
Remove `RunInDirTimeoutPipeline` `RunInDirTimeoutFullPipeline` `RunInDirTimeout` `RunInDirTimeoutEnv` `RunInDirPipeline` `RunInDirFullPipeline` `RunTimeout`, `RunInDirTimeoutEnvPipeline`, `RunInDirTimeoutEnvFullPipeline`, `RunInDirTimeoutEnvFullPipelineFunc`.
Then remaining `RunInDir` `RunInDirBytes` `RunInDirWithEnv` can be easily refactored in next PR with a simple search & replace:
* before: `stdout, err := RunInDir(path)`
* next: `stdout, _, err := RunWithContextString(&git.RunContext{Dir:path})`
Other changes:
1. When `timeout <= 0`, use the default, because `timeout == 0` is meaningless and could cause bugs. Many functions become simpler as a result, e.g. `GitGcRepos` goes from 9 lines to 1, `Fsck` from 6 lines to 1.
2. Only set defaultCommandExecutionTimeout when the option `setting.Git.Timeout.Default > 0`
Gitea was not able to supply any authentication parameters to Redis. This brings support for doing that, along with some light extraction of a couple of bits into separate functions for easier testing.
I looked at other libraries supporting similar RedisUri-style connection strings (e.g. Lettuce), but it looks like this type of configuration is beyond what would typically be done in a connection string. Since Gitea doesn't have configuration options for manually specifying all this Redis connection detail, I went ahead and just chose straightforward names for these new parameters.
Strangely #19038 appears to relate to an issue whereby a tag appears to
be listed in `git show-ref --tags` but then does not appear when `git
show-ref --tags -- short_name` is called.
As a solution, I propose to drop the second call, as it is
unnecessary and only likely to cause problems.
I've also noticed that the tags calls are wildly inefficient and aren't using the common cat-files - so these have been added.
I've also noticed that the git commit-graph is not being written on mirroring - so I've also added writing this to the migration which should improve mirror rendering somewhat.
Fix #19038
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
The last PR about clone buttons introduced a JS error when visiting an empty repo page:
* https://github.com/go-gitea/gitea/pull/19028
* `Uncaught ReferenceError: isSSH is not defined`, because the variables are scoped and not shared between sub-templates.
This PR:
1. Simplify `templates/repo/clone_buttons.tmpl` and make the code clearer
2. Move most JS code into `initRepoCloneLink`
3. Remove unused `CloneLink.Git`
4. Remove `ctx.Data["DisableSSH"] / ctx.Data["ExposeAnonSSH"] / ctx.Data["DisableHTTP"]`, and only set them where needed (e.g. deploy keys / SSH keys)
5. Introduce `Data["CloneButton*"]` to provide data for clone buttons and links
6. Introduce `Data["RepoCloneLink"]` for the repo clone link (not the wiki)
7. Remove most `ctx.Data["PageIsWiki"]` because it has been set in the `/wiki` middleware
8. Remove incorrect `quickstart` class in `migrating.tmpl`