Mainly for MySQL/MSSQL.
It is important for Gitea to use a case-sensitive database collation. If the database uses a case-insensitive collation, Gitea
will show error/warning messages at startup and display them on the
admin panel's Self-Check page.
Make `gitea doctor convert` work for MySQL to convert the collations of
the database, tables, and columns.
* Fix #28131
## ⚠️ BREAKING ⚠️
This is not strictly breaking, but it is highly recommended to convert the
database, tables, and columns to a consistent, case-sensitive collation.
Fix #28436.
The doc https://docs.gitea.com/usage/profile-readme may also need to
be updated to mention that the default branch is required, which means the following three conditions
should be satisfied:
- repo: **.profile**
- branch: **[default branch]**
- markdown: **README.md**
Introduce the new generic deletion methods
- `func DeleteByID[T any](ctx context.Context, id int64) (int64, error)`
- `func DeleteByIDs[T any](ctx context.Context, ids ...int64) error`
- `func Delete[T any](ctx context.Context, opts FindOptions) (int64,
error)`
With these, we no longer need type-specific deletion methods and can just use
the generic ones instead.
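As a sketch of how such generic helpers can be implemented on top of xorm (the ORM Gitea uses); this is illustrative and passes the engine explicitly, whereas the real methods obtain it from the context:
```
package db

import (
	"context"

	"xorm.io/xorm"
)

// DeleteByID deletes the single row of type T with the given primary key.
func DeleteByID[T any](ctx context.Context, e *xorm.Engine, id int64) (int64, error) {
	var bean T
	return e.Context(ctx).ID(id).NoAutoCondition().NoAutoTime().Delete(&bean)
}

// DeleteByIDs deletes all rows of type T whose id is in the given list.
func DeleteByIDs[T any](ctx context.Context, e *xorm.Engine, ids ...int64) error {
	if len(ids) == 0 {
		return nil
	}
	var bean T
	_, err := e.Context(ctx).In("id", ids).NoAutoCondition().NoAutoTime().Delete(&bean)
	return err
}
```
Call sites then pass the model type as the type parameter instead of calling a per-model deletion helper.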
Replacement of #28450
Closes #28450
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
The CORS code has been unmaintained for a long time, and its behavior is
not correct.
This PR tries to improve it. The key points are explained in code comments, and more tests have been added.
Fix #28515
Fix #27642
Fix #17098
Related to https://github.com/go-gitea/gitea/issues/28279
When merging artifact chunks, Gitea lists the chunks from storage. When the storage
is MinIO, the chunk's path contains `MINIO_BASE_PATH`, which breaks the merging.
<del>So trim the `MINIO_BASE_PATH` when handling chunks.</del>
Instead, update the chunk file's basename to retain the necessary information, so the
directory part of the chunk's path no longer matters.
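For illustration only (the naming format below is hypothetical, not the one used by the PR): if the merge step can recover everything it needs from the basename, the storage-specific directory prefix never has to be interpreted or trimmed.
```
package main

import (
	"fmt"
	"path"
	"strconv"
	"strings"
)

// parseChunkRange reads the byte range of a chunk from its basename only,
// e.g. ".../<MINIO_BASE_PATH>/.../0-1023.chunk", so the directory part of the
// stored path (which may include MINIO_BASE_PATH) is irrelevant.
func parseChunkRange(storedPath string) (start, end int64, err error) {
	base := strings.TrimSuffix(path.Base(storedPath), ".chunk")
	parts := strings.Split(base, "-")
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("invalid chunk basename: %q", base)
	}
	if start, err = strconv.ParseInt(parts[0], 10, 64); err != nil {
		return 0, 0, err
	}
	if end, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
		return 0, 0, err
	}
	return start, end, nil
}

func main() {
	fmt.Println(parseChunkRange("minio-base/actions-artifacts/20/10/0-1023.chunk"))
}
```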
Nowadays, the cache is used almost everywhere in Gitea and cannot
be disabled, otherwise some features become unavailable.
So I think we can just remove the option to enable the cache; that means
the cache can no longer be disabled.
Of course, the cache configuration can still be used to control how
Gitea uses the cache.
The following 4 functions are duplicated, especially as interface methods. I think
we only need to keep `MustID` and remove the other 3:
```
MustID(b []byte) ObjectID
MustIDFromString(s string) ObjectID
NewID(b []byte) (ObjectID, error)
NewIDFromString(s string) (ObjectID, error)
```
Introduced the new interface method `ComputeHash`, which replaces
the `HasherInterface` interface, so we no longer need to keep two
interfaces.
Reintroduced `git.NewIDFromString` and `git.MustIDFromString`. The new
functions detect the hash length to decide which object format it is:
if it's 40, it's SHA1; if it's 64, it's SHA256. This is only correct when the
commit ID is a full one, so the parameter must always be a full commit ID.
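A minimal sketch of the length-based detection described above (the string return value is illustrative; the real functions return an `ObjectID` of the detected format):
```
package git

import "fmt"

// objectFormatNameFromHexLength maps a full hex commit ID to an object format
// by its length: SHA-1 is 20 bytes (40 hex chars), SHA-256 is 32 bytes (64 hex chars).
func objectFormatNameFromHexLength(commitID string) (string, error) {
	switch len(commitID) {
	case 40:
		return "sha1", nil
	case 64:
		return "sha256", nil
	default:
		return "", fmt.Errorf("invalid commit ID length %d: a full commit ID is required", len(commitID))
	}
}
```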
@AdamMajer Please review.
- Modify the `Password` field in the `CreateUserOption` struct to remove the
`Required` tag
- Update the `v1_json.tmpl` template to include the `email` field and
remove the `password` field
---------
Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>
- Remove `ObjectFormatID`.
- Remove the function `ObjectFormatFromID`.
- Use `Sha1ObjectFormat` directly rather than a pointer, because it's an
empty struct.
- Store `ObjectFormatName` in the `repository` struct.
Refactor the hash interfaces and centralize the hash function. This will allow
easier introduction of different hash functions later on.
This forms the "no-op" part of the SHA256 enablement patch.
Recently Docker started to use the optional `POST /v2/token` endpoint
which should respond with a `404 Not Found` status code instead of the
current `405 Method Not Allowed`.
> Note: Not all token servers implement oauth2. If the request to the
endpoint returns 404 using the HTTP POST method, refer to Token
Documentation for using the HTTP GET method supported by all token
servers.
## Changes
- Add deprecation warning to `Token` and `AccessToken` authentication
methods in swagger.
- Add deprecation warning header to API response. Example:
```
HTTP/1.1 200 OK
...
Warning: token and access_token API authentication is deprecated
...
```
- Add a setting `DISABLE_QUERY_AUTH_TOKEN` to reject query-string auth
tokens entirely. The default is `false` (see the sketch below).
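A rough sketch of the intended behavior (handler wiring and names are illustrative, not the actual Gitea auth code):
```
package auth

import "net/http"

// checkQueryAuthToken rejects query-string tokens when the new setting is
// enabled; otherwise it attaches the deprecation warning header.
func checkQueryAuthToken(disableQueryAuthToken bool, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		q := req.URL.Query()
		if q.Get("token") != "" || q.Get("access_token") != "" {
			if disableQueryAuthToken {
				http.Error(w, "query string auth tokens are disabled", http.StatusUnauthorized)
				return
			}
			w.Header().Set("Warning", "token and access_token API authentication is deprecated")
		}
		next.ServeHTTP(w, req)
	})
}
```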
## Next steps
- `DISABLE_QUERY_AUTH_TOKEN` should default to `true` in a subsequent release, and
the methods should be removed from swagger
- Later, `DISABLE_QUERY_AUTH_TOKEN` should be removed entirely, along with the implementation of
the auth methods in question
## Open questions
- Should there be further changes to the swagger documentation?
Deprecation is not yet supported for security definitions (coming in
[OpenAPI Spec version
3.2.0](https://github.com/OAI/OpenAPI-Specification/issues/2506))
- Should the API router logger sanitize URLs that use `token` or
`access_token`? (This is obviously an insufficient solution on its own.)
---------
Co-authored-by: delvh <dev.lh@web.de>
Fix #28056
This PR checks whether the repo has zero branches when a
branch is pushed. If so, it means this repository hasn't been synced.
The cause is that after a user upgrades from v1.20 to v1.21, they may
push branches without ever visiting the repository web UI. All
repository routes check whether a branch sync is necessary,
but pushes had no such check.
Every repository has one of two states: synced or not synced. If a repository
has zero branches, it is assumed to be in the non-synced state; otherwise it is
synced. If it's synced, we only need to update the branch or insert a new
branch; otherwise we do a full sync. This way almost no extra load is added
for each push, which performs better than the alternative approach.
For the implementation, we first try to update the branch. If the update
succeeds with affected records > 0, we are done, because that means the
branch is already in the database. If no record is affected, the branch does
not exist in the database, and there are two possibilities: either this is a
new branch and we just need to insert the record, or the branches haven't
been synced yet and we need to sync all branches into the database.
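A sketch of that flow on top of xorm (the model and function names are illustrative, not the actual Gitea code):
```
package repo

import (
	"context"

	"xorm.io/xorm"
)

// Branch holds only the columns needed for this sketch; the real model has more fields.
type Branch struct {
	ID       int64  `xorm:"pk autoincr"`
	RepoID   int64  `xorm:"UNIQUE(s)"`
	Name     string `xorm:"UNIQUE(s)"`
	CommitID string
}

// syncAllBranches stands in for Gitea's full branch sync and is not shown here.
func syncAllBranches(ctx context.Context, e *xorm.Engine, repoID int64) error { return nil }

// syncBranchOnPush sketches the decision flow described above: try an UPDATE
// first; if no row was affected, either the branch is new (insert it) or the
// repository has never been synced (do a full sync).
func syncBranchOnPush(ctx context.Context, e *xorm.Engine, repoID int64, name, commitID string) error {
	affected, err := e.Context(ctx).
		Where("repo_id = ? AND name = ?", repoID, name).
		Cols("commit_id").
		Update(&Branch{CommitID: commitID})
	if err != nil {
		return err
	}
	if affected > 0 {
		return nil // the branch row already existed and has been updated
	}

	total, err := e.Context(ctx).Where("repo_id = ?", repoID).Count(new(Branch))
	if err != nil {
		return err
	}
	if total > 0 {
		// The repo is already synced; this is simply a new branch.
		_, err = e.Context(ctx).Insert(&Branch{RepoID: repoID, Name: name, CommitID: commitID})
		return err
	}
	// Zero branches: the repo has never been synced, so do a full sync.
	return syncAllBranches(ctx, e, repoID)
}
```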
It will fix #28268 .
<img width="1313" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/cb1e07d5-7a12-4691-a054-8278ba255bfc">
<img width="1318" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/4fd60820-97f1-4c2c-a233-d3671a5039e9">
## ⚠️ BREAKING ⚠️
We need to give up some features:
<img width="1312" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/281c0d51-0e7d-473f-bbed-216e2f645610">
However, such abandonment may fix #28055.
## Background
When the user switches the dashboard context to an org, it means they
want to search issues in the repos that belong to the org. However, when
they switch to themselves, it means all repos they can access because
they may have created an issue in a public repo that they don't own.
<img width="286" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/182dcd5b-1c20-4725-93af-96e8dfae5b97">
It's a confusing design. Think about it: what does "In your
repositories" mean when the user switches to an org? Repos belonging to the
user, or to the org?
In any case, it has been broken by #26012 and its follow-up PRs. After that
PR, issues are searched only in repos that the dashboard context user owns
or has been explicitly granted access to, which causes #28268.
## How to fix it
It's not really difficult to fix. Just extend the repo scope for issue
search when the dashboard context user is the doer. Since the
user may create issues or be mentioned in any public repo, we can simply
set `AllPublic` to true, which is already supported by the indexers. The DB
condition also gains support for it in this PR (see the sketch below).
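A sketch of the scope change (the options type here holds only the relevant fields; the real indexer options type has many more):
```
package issues

// searchOptions holds only the fields relevant to this sketch.
type searchOptions struct {
	RepoIDs   []int64
	AllPublic bool
}

// scopeForDashboard sketches the fix: when the dashboard context user is the
// doer, include all public repos; otherwise restrict to the selected org's repos.
func scopeForDashboard(opts *searchOptions, ctxUserID, doerID int64, orgRepoIDs []int64) {
	if ctxUserID == doerID {
		opts.AllPublic = true // any public repo may contain the doer's issues or mentions
		opts.RepoIDs = nil
	} else {
		opts.AllPublic = false
		opts.RepoIDs = orgRepoIDs
	}
}
```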
But the real difficulty is how to count the search results grouped by
repos. It's something like "search issues with this keyword and those
filters, and return the total number and the top results. **Then, group
all of them by repo and return the counts of each group.**"
<img width="314" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/5206eb20-f8f5-49b9-b45a-1be2fcf679f4">
Before #26012, this was done in the DB, but it caused the results to
be incomplete (see the description of #26012).
To keep this behavior, #26012 implemented it in an inefficient way: it counts
the issues repo by repo, so it cannot work when `AllPublic` is
true, because it's almost impossible to do this for all public repos.
1bfcdeef4c/modules/indexer/issues/indexer.go (L318-L338)
## Give up unnecessary features
We might be able to resolve the `TODO: use "group by" of the indexer engines to
implement it`. I'm sure it can be done with Elasticsearch, but IIRC,
Bleve and Meilisearch don't support "group by".
The real question is: is it worth it? Why do we need to know
the counts grouped by repo?
Let me show you my search dashboard on gitea.com.
<img width="1304" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/2bca2d46-6c71-4de1-94cb-0c9af27c62ff">
I don't think the long repo list helps at all.
If we agree to abandon it, things become much easier, and that is what this
PR does.
## TODO
I know it's important to filter by repos when searching issues. However,
it shouldn't work the way it does now. It could be implemented like
this:
<img width="1316" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/99ee5f21-cbb5-4dfe-914d-cb796cb79fbe">
The indexers support it well now, but it requires some frontend work,
which I'm not good at. So I think someone could help do that in another
PR, and we can merge this one to fix the bug first.
Or please block this PR and help complete it.
Finally, "Switch dashboard context" is also a design that needs
improvement. In my opinion, it can be accomplished by adding filtering
conditions instead of "switching".
This fixes a regression from #25859.
If a tag has no release, Gitea shows a link to create a release for
the tag if the user has permission to do so, but the variable
indicating that was no longer set.
Used here:
1bfcdeef4c/templates/repo/tag/list.tmpl (L39-L41)
Fix #25473
Although there was `m.Post("/login/oauth/access_token", CorsHandler()...`,
it never really worked, because it still lacked the "OPTIONS" handler.
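A standalone illustration using go-chi and go-chi/cors (not Gitea's actual router code) of why the OPTIONS route matters:
```
package main

import (
	"net/http"

	"github.com/go-chi/chi/v5"
	"github.com/go-chi/cors"
)

func main() {
	r := chi.NewRouter()
	corsHandler := cors.Handler(cors.Options{AllowedOrigins: []string{"*"}})

	// A CORS preflight is an OPTIONS request: if the path only registers POST,
	// the preflight gets 405 and the browser rejects the cross-origin call.
	// Registering OPTIONS with the CORS middleware makes the preflight succeed.
	r.With(corsHandler).Options("/login/oauth/access_token", func(w http.ResponseWriter, req *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	r.With(corsHandler).Post("/login/oauth/access_token", func(w http.ResponseWriter, req *http.Request) {
		w.Write([]byte(`{}`))
	})

	http.ListenAndServe(":8080", r)
}
```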
Fixes #27819
We support two-factor logins with the normal web login and with
basic auth. For basic auth, the two-factor check was implemented in three
different places, and callers needed to know that this check is necessary. This
PR moves the check into the basic auth itself.
Steps to reproduce:
1. Create a new OAuth2 source.
2. Log in as a user via this OAuth2 source.
3. Disable the OAuth2 source.
4. Visit user settings -> security; a 500 error is displayed.
This is because this page only loads active OAuth2 sources, not all
OAuth2 sources.
Fix nil access for inactive auth sources.
> Render failed, failed to render template:
user/settings/security/security, error: template error:
builtin(static):user/settings/security/accountlinks:32:20 : executing
"user/settings/security/accountlinks" at <$providerData.IconHTML>: nil
pointer evaluating oauth2.Provider.IconHTML
The code tries to access the auth source of an `ExternalLoginUser`, but the
list contains only the active auth sources.
After many refactoring PRs for the "locale" and "template context
function", ".locale" is no longer needed for web templates.
This PR does a clean up for:
1. Remove `ctx.Data["locale"]` for web context.
2. Use `ctx.Locale` in `500.tmpl`, for consistency.
3. Add a test check for `500 page` locale usage.
4. Remove `Str2html` and `DotEscape` from the mail template context
data; they are copy&paste errors introduced by #19169 and #16200. These
functions are template functions (provided by the common renderer),
not template data variables.
5. Make email `SendAsync` function mockable (I was planning to add more
tests but it would make this PR much too complex, so the tests could be
done in another PR)
From issue https://github.com/go-gitea/gitea/issues/27314
When act_runner runs in `host` mode on Windows, `upload_artifact@v3` actions
use `path.join` to generate the `itemPath` parameter when uploading an artifact
chunk, so `itemPath` is encoded as `${artifact_name}\${artifact_path}`.
<del>It is query-escaped twice from `${artifact_name}/${artifact_path}`
joined by the Windows backslash `\`.</del>
**So we need to convert Windows backslashes to Linux-style forward slashes.**
In https://github.com/go-gitea/gitea/issues/27314, the runner shows logs
from `upload_artifact@v3` containing `%255C`:
```
[artifact-cases/test-artifact-cases] | ::error::Unexpected response. Unable to upload chunk to http://192.168.31.230:3000/api/actions_pipeline/_apis/pipelines/workflows/6/artifacts/34d628a422db9367c869d3fb36be81f5/upload?itemPath=more-files%255Css.json
```
But the Gitea server log at the same time shows `%5C`:
```
2023/10/27 19:29:51 ...eb/routing/logger.go:102:func1() [I] router: completed PUT /api/actions_pipeline/_apis/pipelines/workflows/6/artifacts/34d628a422db9367c869d3fb36be81f5/upload?itemPath=more-files%5Css.json for 192.168.31.230:55340, 400 Bad Request in 17.6ms @ <autogenerated>:1(actions.artifactRoutes.uploadArtifact-fm)
```
I found that `%255C` is escaped by
https://github.com/actions/upload-artifact/blob/main/dist/index.js#L2329.
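A minimal sketch of the conversion (the helper name is illustrative):
```
package main

import (
	"fmt"
	"strings"
)

// normalizeArtifactItemPath converts Windows-style separators in the itemPath
// query parameter to forward slashes, so splitting the artifact name from the
// artifact path works the same regardless of the runner's OS.
func normalizeArtifactItemPath(itemPath string) string {
	return strings.ReplaceAll(itemPath, `\`, "/")
}

func main() {
	fmt.Println(normalizeArtifactItemPath(`more-files\ss.json`)) // more-files/ss.json
}
```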
---------
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Currently this feature is only available to admins, but there is no
clear reason why. If a user can actually merge pull requests, then this
seems fine as well.
This is useful in situations where direct pushes to the repository are
commonly done by developers.
---------
Co-authored-by: delvh <dev.lh@web.de>
Hello there,
The Cargo index over HTTP is now preferred over git for package updates: we
should not force users who do not need the git repo to have it
created/updated on each publish (it can still be created in the package
settings).
The current behavior when publishing is to check whether the repo exists,
create it on the fly if not, and then update its content.
The Cargo HTTP index does not rely on the repo itself, so this is
useless for everyone not using the git protocol for the Cargo registry.
This PR only disables the on-the-fly creation of the repo when publishing
a crate.
This is linked to #26844 (error 500 when trying to publish a crate if the
user is missing write access to the repo) because the repo is now optional.
---------
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Fixes #27598
In #27080, the logic for the token endpoints was updated to allow
admins to create and view tokens in other accounts. However, the same
functionality was not added to the DELETE endpoint. This PR makes the
DELETE endpoint work the same as the other token endpoints and adds unit tests.
Closes #27455
> The mechanism responsible for long-term authentication (the 'remember
me' cookie) uses a weak construction technique. It will hash the user's
hashed password and the rands value; it will then call the secure cookie
code, which will encrypt the user's name with the computed hash. If one
were able to dump the database, they could extract those two values to
rebuild that cookie and impersonate a user. That vulnerability exists
from the date the dump was obtained until a user changed their password.
>
> To fix this security issue, the cookie could be created and verified
using a different technique such as the one explained at
https://paragonie.com/blog/2015/04/secure-authentication-php-with-long-term-persistence#secure-remember-me-cookies.
The PR removes the now obsolete setting `COOKIE_USERNAME`.
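For reference, a minimal sketch of the style of construction the linked article recommends (a random token whose hash is stored server-side); this illustrates the technique, not the exact implementation of this PR:
```
package auth

import (
	"crypto/rand"
	"crypto/sha256"
	"crypto/subtle"
	"encoding/hex"
)

// newRememberToken generates a random token for the "remember me" cookie and
// returns the raw value (sent in the cookie) plus its hash (stored in the
// database). A database dump then only reveals the hash, which cannot be
// replayed to rebuild the cookie.
func newRememberToken() (raw, hashed string, err error) {
	buf := make([]byte, 32)
	if _, err = rand.Read(buf); err != nil {
		return "", "", err
	}
	raw = hex.EncodeToString(buf)
	sum := sha256.Sum256([]byte(raw))
	return raw, hex.EncodeToString(sum[:]), nil
}

// verifyRememberToken checks a presented cookie value against the stored hash
// in constant time.
func verifyRememberToken(raw, storedHash string) bool {
	sum := sha256.Sum256([]byte(raw))
	return subtle.ConstantTimeCompare([]byte(hex.EncodeToString(sum[:])), []byte(storedHash)) == 1
}
```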
1. Dropzone attachment removal, pretty simple replacement
2. Image diff: The previous code fetched every image twice, once via
`img[src]` and once via `$.ajax`. Now it's only fetched once and a
second time only when necessary. The image diff code was partially
rewritten.
---------
Co-authored-by: Giteabot <teabot@gitea.io>
storageHandler() is written as a middleware but is used as an endpoint
handler, and thus `next` is actually `nil`, which causes a nil pointer
dereference when a request URL does not match the pattern (where it
calls `next.ServeHTTP()`).
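A simplified, self-contained illustration of the pattern (not the actual Gitea code):
```
package main

import (
	"net/http"
	"strings"
)

// storageHandlerLike mimics the middleware shape: it serves requests under
// prefix and hands everything else to next. When it is registered directly as
// an endpoint handler, next is nil and the fallthrough panics.
func storageHandlerLike(prefix string, next http.Handler) http.HandlerFunc {
	return func(w http.ResponseWriter, req *http.Request) {
		if !strings.HasPrefix(req.URL.Path, prefix) {
			next.ServeHTTP(w, req) // nil pointer dereference when next == nil
			return
		}
		w.Write([]byte("serve object from storage"))
	}
}

func main() {
	// Registered as an endpoint handler (as in the bug): next is nil, so any
	// request that fails the internal prefix check panics.
	http.Handle("/", storageHandlerLike("/avatars/", nil))
	http.ListenAndServe(":3000", nil)
}
```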
Example CURL command to trigger the panic:
```
curl -I "http://yourhost/gitea//avatars/a"
```
Fixes #27409
---
Note: the diff looks big, but it's actually a small change - all I did
was remove the outer closure (and one level of indentation) ~and
remove the HTTP method and pattern checks, as they seem redundant
because go-chi already does those checks~. You might want to check "Hide
whitespace" when reviewing it.
Alternative solution (a bit simpler): append `, misc.DummyOK` to the
route declarations that utilize `storageHandler()` - this makes it
return an empty response when the URL is invalid. I've tested this one
and it works too. Or maybe it would be better to return a 400 error in
that case (?)
This pull request is a minor code cleanup.
From the Go specification (https://go.dev/ref/spec#For_range):
> "1. For a nil slice, the number of iterations is 0."
> "3. If the map is nil, the number of iterations is 0."
`len` returns 0 if the slice or map is nil
(https://pkg.go.dev/builtin#len). Therefore, checking `len(v) > 0`
before a loop is unnecessary.
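For example (a standalone illustration of the cleanup):
```
package main

import "fmt"

func main() {
	var items []string // nil slice

	// Before the cleanup: a redundant guard around the loop.
	if len(items) > 0 {
		for _, it := range items {
			fmt.Println(it)
		}
	}

	// After: ranging over a nil (or empty) slice or map simply runs zero iterations.
	for _, it := range items {
		fmt.Println(it)
	}
}
```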
---
At the time of writing this pull request, there wasn't a lint rule that
catches these issues. The closest I could find is
https://staticcheck.dev/docs/checks/#S103
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
This PR reduces the complexity of the system setting system.
Introducing a new option now only needs one line, and the option can be
used anywhere out of the box.
It is still performant (in fact, more performant) because the config
values are cached in the config system.
Part of #27065
This PR touches functions used in templates. As templates are not statically
typed, errors are harder to find, but I hope I caught them all. I think
some additional tests from other people wouldn't hurt.