mirror of https://github.com/Byron/gitoxide synced 2025-10-06 01:52:40 +02:00

Compare commits


46 Commits

Author SHA1 Message Date
Sebastian Thiel
033ce8ef02 Merge pull request #2191 from GitoxideLabs/copilot/fix-b53588ea-1fea-485f-82a8-505a9101514e
Fix `gix credential fill` to accept protocol+host without URL field
2025-09-24 09:37:54 +02:00
Sebastian Thiel
2cc63ca95d refactor
- fmt
- clippy
2025-09-24 09:13:51 +02:00
copilot-swe-agent[bot]
030e040b68 fix: credential fill to allow protocol+host without URL
Co-authored-by: Byron <63622+Byron@users.noreply.github.com>
2025-09-24 09:13:19 +02:00
Sebastian Thiel
a80a150c75 fix: discover upwards to cwd
The upwards search for the repository directory takes a directory as
input and then walks through the parents. It turns out that it was
broken when the repository was the same as the working directory.

The code checked when the directory components had been stripped to "", in
which case the directory was replaced with `cwd.parent()`, so the loop failed
to check `cwd` itself. If the input directory was "./something", then
"." was checked, which then succeeded.
2025-09-22 07:40:36 +02:00
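The fix described above can be sketched as follows, with a hypothetical simplified `discover_upwards` helper (the real logic in `gix-discover` handles far more cases); the key point is that the start directory itself is checked before stepping to its parent:

```rust
use std::path::{Path, PathBuf};

// Hypothetical sketch: walk from `start` upwards and check each directory,
// including `start` itself, so the case `start == repo` is also discovered.
fn discover_upwards(start: &Path) -> Option<PathBuf> {
    let mut cursor = Some(start.to_path_buf());
    while let Some(dir) = cursor {
        // Check the current directory before stepping to the parent.
        if dir.join(".git").is_dir() {
            return Some(dir);
        }
        cursor = dir.parent().map(Path::to_path_buf);
    }
    None
}
```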
Sebastian Thiel
b3b3203853 refactor 2025-09-22 07:09:58 +02:00
Sebastian Thiel
4b535bf195 Merge pull request #2181 from GitoxideLabs/report
Add monthly report for September 2025
2025-09-22 06:38:15 +02:00
Eliah Kagan
9e254c88af Merge pull request #2187 from EliahKagan/run-ci/no-persist-credentials
Don't persist GitHub authentication token in `.git/config` on CI
2025-09-21 20:29:25 -04:00
Eliah Kagan
741cb6b355 Refine check-no-persist-credentials CI job
This also tests the job by manually trying out several ways it
should fail to make sure it does, but I squashed those out. They can
be seen at EliahKagan#105 and are summarized as follows:

* Test that we always have `actions/checkout` not persist credentials

* Check that we catch `actions/checkout` with no `with`

* Improve `check-no-persist-credentials` output and maintainability

* Check that we catch checkout `with` without `persist-credentials`

* Check that we catch `persist-credentials` not set to boolean false

* Having tested the new check, restore `persist-credentials: false`
2025-09-21 20:04:22 -04:00
Eliah Kagan
a235ac8035 Use actions/checkout with persist-credentials: false
When `actions/checkout` is used to check out the repository on CI,
it persists credentials related to the GitHub token in the local
scope configuration at `.git/config`, unless `persist-credentials`
is explicitly set to `false`. This facilitates subsequent remote
operations on the repository that could otherwise fail, but we have
no such operations in any of our workflows.

As an added layer of protection to keep these credentials from
leaking into logs (or otherwise being displayed or subject to
exfiltration) in case there is ever unintended coupling between the
operation of the test suite (or any step subsequent to checkout
that is used to prepare or run tests or other checks) and the
cloned `gitoxide` repository itself, this:

- Adds `persist-credentials: false` in a `with` mapping on every
  step that uses `actions/checkout`.

- Adds a new CI job that checks that every `actions/checkout` step
  in any job in any workflow sets `persist-credentials` to `false`.

In addition to usual testing on CI, the `release.yml` workflow is
among the workflows changed here, and it has also been tested:
https://github.com/EliahKagan/gitoxide/actions/runs/17899238656

See also:

- https://github.com/actions/checkout/blob/main/README.md
  (Covers what happens with/without `persist-credentials: false`).

- https://github.com/actions/checkout/issues/485
2025-09-21 20:04:22 -04:00
Eliah Kagan
f8be65fef6 Merge pull request #2186 from EliahKagan/small
Note that the `small` feature doesn't include `clone`
2025-09-21 19:52:38 -04:00
Eliah Kagan
6ab07f7728 doc: Note that the small feature doesn't include clone
The `small` feature of the `gitoxide` crate does not directly or
indirectly include the `gitoxide-core-blocking-client` feature, and
therefore does not provide `gix clone`. Although it was documented
to provide only local operations, this was ambiguous: in the sense
of local and remote repository operations, cloning is arguably a
remote operation even when one clones from a filesystem path, or
file URL. But in the broader meaning of "local," this could mean
merely that network transport is omitted and that local cloning is
included.

This adds a short explicit note that `small` does not include
`gix clone`. This is a minimal fix for #2185 and it may make sense
to improve the description further (unless `small` is to change).
2025-09-21 19:28:09 -04:00
Fredrik Medley
504afece05 Fix discover::upwards when working dir is the repo
The upwards search for the repository directory takes a directory as
input and then walks through the parents. It turns out that it was
broken when the repository was the same as the working directory.

The code checked when the directory components had been stripped to "", in
which case the directory was replaced with `cwd.parent()`, so the loop failed
to check `cwd` itself. If the input directory was "./something", then
"." was checked, which then succeeded.
2025-09-22 00:43:30 +02:00
Eliah Kagan
f7c7145df0 Merge pull request #2183 from EliahKagan/macos-intel
Build Intel Mac releases on Intel runner to fix missing `openssl` for target
2025-09-21 16:00:19 -04:00
Eliah Kagan
f56ca27244 Build x86_64-apple-darwin on macos-15-intel
This changes the runner in the `release.yml` workflow for the job
that builds the `x86_64-apple-darwin` target from `macos-latest`
(which currently aliases the `macos-15` runner, an Apple Silicon
system) to the recently introduced `macos-15-intel` runner. This is
to fix recent build failures for the `lean` and `max` feature jobs
with that target, which happen when the `openssl-sys` dependency
attempts to find the installed `openssl` library for the target
architecture.

The new failures can be seen in these runs:

- https://github.com/EliahKagan/gitoxide/actions/runs/17895976664/job/50882669073
- https://github.com/EliahKagan/gitoxide/actions/runs/17895976664/job/50882669084

I'm not sure why the failures happened, since they did not occur in
the `macos-15` experiments with `release.yml` done as part of #2078.

On the new `macos-15-intel` runner, see:
https://github.blog/changelog/2025-09-19-github-actions-macos-13-runner-image-is-closing-down/

To verify that the changes here fix the failures, see:

- https://github.com/EliahKagan/gitoxide/actions/runs/17897891645/job/50886931822
- https://github.com/EliahKagan/gitoxide/actions/runs/17897891645/job/50886931809
2025-09-21 15:40:25 -04:00
Sebastian Thiel
3148010f8a Add monthly report for September 2025 2025-09-21 13:20:28 +02:00
Eliah Kagan
d976848e7e Merge pull request #2180 from EliahKagan/copy-royal-guidance
Strengthen and adjust `copy-royal` usage guidance and caveats
2025-09-21 05:11:25 -04:00
Eliah Kagan
90a3dcbddc doc: Strengthen and adjust copy-royal usage guidance and caveats
The `copy-royal` algorithm maintains the patterns and "shape" of
text sufficiently to keep diffs the same (in the vast majority of
cases). It is used in `internal-tools` to help prepare test cases
with what is important and relevant to a regression test of diff
behavior, rather than the exact original repository content in a
tree that has been found to trigger a bug. It avoids needless
verbatim reproduction, while preserving aspects that are useful and
necessary for testing. It keeps the focus on patterns, preventing
irrelevant details of code in a tree that triggered a bug from
being confused with the logic of gitoxide itself, and makes it less
likely to be touched inadvertently in efforts to fix bugs or
improve style (which, in test data, would cause subtle breakage).

Although these benefits are substantial and we intend to continue
using copy-royal in the preparation of test cases as needed if or
when regressions arise, some of the guidance and rationale we had
given for its use was inaccurate or misleading. Most importantly,
copy-royal cannot be used in practice to redact sensitive
information: if you have a repository whose contents should not be
made public, then it is not safe to share the output of copy-royal
run on that repository either.

Copy-royal is implemented (roughly speaking) by mapping alphabetic
characters down to ten letters. This removes some information, at
least in principle: that is, if it were given totally random
letters as input, then it would be impossible to reverse it to get
those letters back. Even on input that is much more structured and
predictable, such as real-world input, it obfuscates it, making it
look garbled and nonsensical. However, even when one intuitively
feels that it has destroyed information, it is possible to reverse
it in many cases, and possibly even in all practical cases.

The reason is that, in real world source code and natural language,
some sequences of letters are overwhelmingly more likely to occur
than others, both in general and (especially) contextually given
what surrounding text is present. The information that is removed
by mapping into ten letters could often be reconstructed by:

1. Building a grammar of possible inputs, which can be done in a
   simple manner by translating the copy-royal output one wishes to
   reverse into a regular expression in which every symbol in the
   copy-royal output becomes a character class of characters that
   map to it. In effect, for every output of the copy-royal
   algorithm, there is a regex that matches the possible inputs.

2. Predicting, stepwise, what code or text is likely to have arisen
   that matches that grammar. In principle this could be done with
   a variety of techniques or even manually. But one fruitful
   approach would be to use an autoregressive large language model,
   and apply constrained decoding[1] to sample only logits
   consistent with the regex. Small experiments carried out so far
   suggest[2] this to be a workable technique when combined with
   beam search[3]. (This technique does not require the specific
   text or code being reconstructed to have existed when the model
   was trained.)
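Step 1 above can be sketched as follows, assuming a hypothetical many-to-one letter mapping (the real copy-royal mapping may differ); every lowercase letter in the obfuscated output becomes a character class of all letters that could have produced it:

```rust
// Hypothetical sketch of step 1: given an (assumed) many-to-one letter
// mapping used by an obfuscator, build a regex in which each output
// letter becomes a character class of its possible preimages.
fn preimage_regex(output: &str, map: &dyn Fn(char) -> char) -> String {
    let mut pattern = String::new();
    for c in output.chars() {
        if c.is_ascii_lowercase() {
            // Character class of all lowercase letters mapping to `c`.
            let class: String = ('a'..='z').filter(|&x| map(x) == c).collect();
            pattern.push('[');
            pattern.push_str(&class);
            pattern.push(']');
        } else {
            // Pass other characters through, escaping regex metacharacters.
            if "\\.^$|?*+()[]{}".contains(c) {
                pattern.push('\\');
            }
            pattern.push(c);
        }
    }
    pattern
}
```

The resulting pattern is exactly the grammar of possible inputs that step 2 would then sample from.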

Accordingly, this modifies the documentation of copy-royal to avoid
claiming that the input of copy-royal cannot be recovered, or
anything that recommends or may appear to recommend the use of
copy-royal to redact sensitive information. It also clarifies and
adjusts the explanation of when it makes sense to use copy-royal,
and describes some of its benefits that do not rely on the
assumption that it is infeasible (or even difficult) to reverse.

In the comment documenting `BlameCopyRoyal`, which is among those
edited in the above ways, this also edits its top line to make
clear more generally how `BlameCopyRoyal` relates to `git blame`.

[1]: https://github.com/Saibo-creator/Awesome-LLM-Constrained-Decoding
[2]: See link(s) in https://github.com/GitoxideLabs/gitoxide/pull/2180
[3]: https://en.wikipedia.org/wiki/Beam_search

Co-authored-by: Sebastian Thiel <sebastian.thiel@icloud.com>
2025-09-21 02:34:49 -04:00
Sebastian Thiel
442f800026 Merge pull request #2174 from cruessler/deprecate-in-place-methods
feat: replace `Reference::peel_to_id_in_place_packed`
2025-09-18 04:41:33 +02:00
Christoph Rüßler
03804965f5 feat: replace Reference::peel_to_id_in_place_packed
Also, update documentation where it was still referring to deprecated
`in_place` methods to refer to the new methods instead.
2025-09-17 10:15:39 +02:00
Sebastian Thiel
51f998f60a Merge pull request #2173 from metlos/remote-with-url
feat: ability to change the fetch url of a remote
2025-09-17 05:17:28 +02:00
Sebastian Thiel
620d275bea fix: deprecate Remote::push_url*() in favor of Remote::with_push*(). 2025-09-17 04:58:33 +02:00
Lukas Krejci
1d4a7f534d feat: ability to change the fetch url of a remote
The remote has a couple of "builder" methods to change
its fields, e.g. `push_url` for setting the push url.

A builder method for changing the fetch url of a remote
was missing, which made it impossible to fully replicate
the functionality of `git remote set-url`.
2025-09-15 21:35:25 +02:00
Sebastian Thiel
81c0c1612d Merge pull request #2171 from cruessler/deprecate-in-place-methods-on-head
feat: replace `peel_to_x_in_place` by `peel_to_x` in `gix::types::Head`
2025-09-15 05:44:34 +02:00
Sebastian Thiel
d302fa20df Merge pull request #2170 from GitoxideLabs/copilot/fix-7a3e1d32-c145-43e2-8e87-319b255dbc2f
Implement WriteTo trait for gix::Blob
2025-09-15 05:40:03 +02:00
Christoph Rüßler
1536fd8651 Adapt to changes in gix 2025-09-15 05:25:57 +02:00
Christoph Rüßler
b6fbf05392 feat: replace Head::(try_)peel_to_x_in_place with Head::peel_to_x.
The `_in_place()` suffixed methods are now deprecated.
2025-09-15 05:24:44 +02:00
Sebastian Thiel
3499050a97 refactor 2025-09-15 05:20:19 +02:00
copilot-swe-agent[bot]
ca9988284c Implement WriteTo trait for gix::Blob with comprehensive tests
Co-authored-by: Byron <63622+Byron@users.noreply.github.com>
2025-09-15 05:20:19 +02:00
Sebastian Thiel
e60e253831 Merge pull request #2169 from GitoxideLabs/copilot/fix-54bce74a-dc5e-4361-b53b-326c16b34046
Remove special handling of empty blob hash in gix - treat as error if not found
2025-09-14 08:47:51 +02:00
Sebastian Thiel
b9c7a7e076 refactor 2025-09-14 08:30:04 +02:00
copilot-swe-agent[bot]
e087960347 fix: remove special handling for empty blob hash to match Git behaviour
This feature was recently introduced, but was never released.

Co-authored-by: Byron <63622+Byron@users.noreply.github.com>
2025-09-14 08:20:36 +02:00
Sebastian Thiel
f891c37edb Merge pull request #2168 from GitoxideLabs/copilot/fix-01a02b99-91ef-4e27-b90f-19af7d0d252c
Add `commit_raw` and `commit_as_raw` methods for creating commits without reference updates
2025-09-13 21:27:06 +02:00
Sebastian Thiel
d4c2542c2a refactor 2025-09-13 21:09:49 +02:00
Sebastian Thiel
3c0b7374f1 feat: Add Repository::new_commit and Repository::new_commit_as() methods.
Co-authored-by: Byron <63622+Byron@users.noreply.github.com>
2025-09-13 21:09:14 +02:00
Sebastian Thiel
1a4c84dd96 Merge pull request #2167 from GitoxideLabs/copilot/fix-3952f55e-8faf-4737-886f-09e74cab4ca8
Make `gix::Repository::has_object()` consider empty blob ids to be always present
2025-09-13 19:31:14 +02:00
Sebastian Thiel
b4812d9d95 Merge pull request #2166 from GitoxideLabs/copilot/fix-ffb06a30-3f53-4f82-94ed-51935d63da63
Add `is_empty_tree()` and `is_empty_blob()` methods to `gix_hash::Oid`
2025-09-13 19:20:23 +02:00
Sebastian Thiel
689d839230 refactor 2025-09-13 19:14:44 +02:00
Sebastian Thiel
772c434483 Merge pull request #2165 from GitoxideLabs/copilot/fix-ceb7b1a4-93f5-4da1-a4f2-4290c8f594eb
Add `Kind::empty_blob()` and `Kind::empty_tree()` methods to gix-hash
2025-09-13 19:08:16 +02:00
copilot-swe-agent[bot]
2fc9dbe7e5 fix: empty blob hashes are now automatically considered present.
Co-authored-by: Byron <63622+Byron@users.noreply.github.com>
2025-09-13 19:02:55 +02:00
Sebastian Thiel
58e2d18cd0 refactor 2025-09-13 18:58:58 +02:00
copilot-swe-agent[bot]
0f20e0a542 feat: Add `oid::is_empty_tree()` and `oid::is_empty_blob()`. 2025-09-13 18:58:50 +02:00
Sebastian Thiel
41c40adf3d refactor 2025-09-13 18:51:11 +02:00
copilot-swe-agent[bot]
7fca001b13 feat: Add `Kind::empty_blob()` and `Kind::empty_tree()` methods. 2025-09-13 18:49:15 +02:00
Sebastian Thiel
42f8db5bc9 Merge pull request #2163 from cruessler/deprecate-in-place-methods
feat: replace peel_to_x_in_place by peel_to_x
2025-09-13 07:31:19 +02:00
Christoph Rüßler
44922d0a71 Adapt to changes in gix-ref 2025-09-12 11:31:36 +02:00
Christoph Rüßler
e179f825f7 feat: replace peel_to_x_in_place by peel_to_x 2025-09-12 11:31:36 +02:00
53 changed files with 982 additions and 130 deletions


@@ -38,6 +38,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: extractions/setup-just@v3
- name: Read the MSRV
run: |
@@ -60,6 +62,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: extractions/setup-just@v3
- name: Ensure we start out clean
run: git diff --exit-code
@@ -75,6 +79,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- name: Prerequisites
run: |
prerequisites=(
@@ -177,6 +183,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
- name: Setup dependencies
@@ -197,6 +205,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
- uses: extractions/setup-just@v3
@@ -221,6 +231,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
- name: cargo check default features
@@ -268,6 +280,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
- uses: taiki-e/install-action@v2
@@ -339,6 +353,8 @@ jobs:
apt-get install --no-install-recommends -y -- "${prerequisites[@]}"
shell: bash # This step needs `bash`, and the default in container jobs is `sh`.
- uses: actions/checkout@v5
with:
persist-credentials: false
- name: Install Rust via Rustup
run: |
# Specify toolchain to avoid possible misdetection based on the 64-bit running kernel.
@@ -365,6 +381,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: dtolnay/rust-toolchain@stable
with:
targets: ${{ env.TARGET }}
@@ -382,6 +400,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: dtolnay/rust-toolchain@master
with:
toolchain: stable
@@ -412,6 +432,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: EmbarkStudios/cargo-deny-action@v2
with:
command: check advisories
@@ -422,6 +444,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: EmbarkStudios/cargo-deny-action@v2
with:
command: check bans licenses sources
@@ -441,6 +465,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- name: Install Rust
run: |
rustup update stable
@@ -520,6 +546,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- name: Check that working tree is initially clean
run: |
set -x
@@ -533,6 +561,33 @@ jobs:
git status
git diff --exit-code
# Check that all `actions/checkout` in CI jobs have `persist-credentials: false`.
check-no-persist-credentials:
runs-on: ubuntu-latest
env:
GLOB: .github/workflows/*.@(yaml|yml)
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
sparse-checkout: '.github/workflows'
- name: List workflows to be scanned
run: |
shopt -s extglob
printf '%s\n' ${{ env.GLOB }}
- name: Scan workflows
run: |
shopt -s extglob
yq '.jobs.*.steps[]
| select(.uses == "actions/checkout@*" and .with.["persist-credentials"]? != false)
| {"file": filename, "line": line, "name": (.name // .uses)}
| .file + ":" + (.line | tostring) + ": " + .name
' -- ${{ env.GLOB }} >query-output.txt
cat query-output.txt
test -z "$(<query-output.txt)" # Report failure if we found anything.
# Check that only jobs intended not to block PR auto-merge are omitted as
# dependencies of the `tests-pass` job below, so that whenever a job is
# added, a decision is made about whether it must pass for PRs to merge.
@@ -557,6 +612,7 @@ jobs:
echo "WORKFLOW_PATH=${relative_workflow_with_ref%@*}" >> "$GITHUB_ENV"
- uses: actions/checkout@v5
with:
persist-credentials: false
sparse-checkout: ${{ env.WORKFLOW_PATH }}
- name: Get all jobs
run: yq '.jobs | keys.[]' -- "$WORKFLOW_PATH" | sort | tee all-jobs.txt
@@ -586,6 +642,7 @@ jobs:
- lint
- cargo-deny
- check-packetline
- check-no-persist-credentials
- check-blocking
if: always() # Always run even if dependencies fail.


@@ -14,6 +14,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: Swatinem/rust-cache@v2
- name: stress
run: make stress


@@ -41,6 +41,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Get the release version from the tag
if: env.VERSION == ''
@@ -132,7 +134,7 @@ jobs:
- target: s390x-unknown-linux-gnu
os: ubuntu-latest
- target: x86_64-apple-darwin
os: macos-latest
os: macos-15-intel
- target: aarch64-apple-darwin
os: macos-latest
- target: x86_64-pc-windows-msvc
@@ -234,6 +236,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Install packages (Ubuntu)
# Because openssl doesn't work on musl by default, we resort to max-pure.
@@ -537,6 +541,8 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
persist-credentials: false
- name: Install Rust
uses: dtolnay/rust-toolchain@master
with:


@@ -58,7 +58,7 @@ lean = ["fast", "tracing", "pretty-cli", "http-client-curl", "gitoxide-core-tool
## The smallest possible build, best suitable for small single-core machines.
##
## This build is essentially limited to local operations without any fanciness.
## This build is essentially limited to local operations without any fanciness. It does not have `gix clone`.
##
## Optimized for size, no parallelism thus much slower, progress line rendering.
small = ["pretty-cli", "prodash-render-line", "is-terminal"]

etc/reports/25-09.md Normal file

@@ -0,0 +1,53 @@
By this time I think it's fair to say that I am mostly busy with GitButler, and `gitoxide` doesn't receive more than basic maintenance and the occasional fix.
Fortunately, this isn't at all bad news as GitButler uses `gitoxide` and will only use more of it - it's just that for now, getting what's there stable and safe
is more important.
## GitButler - Apply & Unapply
These are fundamental operations for GitButler as they add and remove a branch from the GitButler workspace. After the core-engine was rewritten, this operation is more flexible than before and that, in turn, costs time to fully test and implement. And of course, all that has to happen in a way that is compatible with all the 'old' code that is still doing a lot of the heavy lifting, so workspaces have to remain compatible for now.
In the process of dealing with this, a new primitive was created: 'safe-checkout'. As such, it's a checkout that is very similar to what Git does when checking things out, except that it's willing to merge locally changed files as long as there is no conflict. The actual checkout is still performed by `git2`, but that's quite slow and lacks compatibility with Git to the point where `git-lfs` is not supported.
Thus, a way to *reset the working tree* to a known state *after* the caller did their own due-diligence is a feature that is planned next to make 'safe-checkout' faster and better. For `gitoxide` this doesn't even mean too much as all the primitives to do this safely already exist - they merely need to be recombined. If everything goes well, this would mean that rather large checkouts that cost 16s with `git2` and 11s with Git could take as little as 2.2s with `gitoxide` with as little as 3 threads.
And that, in turn, would make a real difference for developer tooling.
## Community
### `gix::Repository::blame_file()`
Thanks to [Christoph Rüßler](https://github.com/cruessler) `gitoxide` had a very competitive `gix blame <file>` implementation for a while already, and one that was available on the command-line as well.
Now, however, he also added it to `gix::Repository`, making it available to other tools with ease - previously they would have had to use the plumbing in the `gix-blame` crate directly.
It's notable that its biggest shortcoming is still "The Slider Problem", stemming from us not using the same `xdiff` implementation that Git and `git2` use, but… it's already on our list of problems to tackle. Stay tuned.
### Zlib-rs is now the default
By the hands of none other than [Folkert de Vries](https://github.com/folkertdev), `gitoxide` could completely shed its dependency on `flate2`, along with all the feature flags to select from its various backends, and replace it with direct access to the `zlib-rs` C-API to eliminate that abstraction layer entirely.
And as `zlib-rs` is planned to expose a pure-Rust API as well, I am sure `gitoxide` is on the list of projects to try it out with.
### Better Git installation detection on Windows
On Windows, and thanks to the unmatched attention to detail of [Eliah Kagan](https://github.com/EliahKagan), `gitoxide` will now be able to detect Git installations even if these are per user.
And having that is still quite crucial for tooling to do the best possible job, just because many hooks rely on tooling that is only available in the Git-provided Bash shell. After all, Git ships a small Linux environment along with it nowadays.
### The advent of AI Part 2
> `gitoxide` is feeling the impact of AI tooling, such as agents and IDE integrations that allow for the generation of copious amounts of code.
> The adequate response is still unclear to me, and I have tried auto-reviewing with Copilot and even proactively launching Copilot to attempt resolving entire issues itself (-> it doesn't work).
This was written last month, and this month I may add that Copilot is a hammer I find useful. It's made for people like me who don't like to *actually* work with AI and let it write code, as that turns "thinking + writing" into mostly "reading + trying to keep up". What it allows one to do is to one-shot a solution, or *a basis for a solution*, with minimal effort. And even though this often produces garbage, I have celebrated many successes with it as well, leading to fixes and improvements that wouldn't (yet) exist without it.
It's viable, and it's valuable, and I am curious if it gets more powerful or if it will remain in its niche.
### Gix in Cargo
There are good things on the horizon. With GitButler slated to have its checkout driven by a tailor-made implementation of 'reset' in `gitoxide`, this coincidentally is exactly what Cargo would need to also greatly speed up its checkouts and make them more compatible, too. We are talking proper Git filter support, and speedups in the realm of ~7x (see the `GitButler` paragraph for details).
Cheers
Sebastian
PS: The latest timesheets can be found [here (2025)](https://github.com/Byron/byron/blob/main/timesheets/2025.csv).


@@ -15,9 +15,14 @@ pub fn function(repo: gix::Repository, action: gix::credentials::program::main::
std::io::stdin(),
std::io::stdout(),
|action, context| -> Result<_, Error> {
let (mut cascade, _action, prompt_options) = repo.config_snapshot().credential_helpers(gix::url::parse(
context.url.as_ref().expect("framework assures URL is present").as_ref(),
)?)?;
let url = context
.url
.clone()
.or_else(|| context.to_url())
.ok_or(Error::Protocol(gix::credentials::protocol::Error::UrlMissing))?;
let (mut cascade, _action, prompt_options) = repo
.config_snapshot()
.credential_helpers(gix::url::parse(url.as_ref())?)?;
cascade
.invoke(
match action {


@@ -12,7 +12,7 @@ pub fn log(mut repo: gix::Repository, out: &mut dyn std::io::Write, path: Option
}
fn log_all(repo: gix::Repository, out: &mut dyn std::io::Write) -> Result<(), anyhow::Error> {
let head = repo.head()?.peel_to_commit_in_place()?;
let head = repo.head()?.peel_to_commit()?;
let topo = gix::traverse::commit::topo::Builder::from_iters(&repo.objects, [head.id], None::<Vec<gix::ObjectId>>)
.build()?;


@@ -166,10 +166,10 @@ impl Fixture {
let mut reference = gix_ref::file::Store::find(&store, "HEAD")?;
// Needed for `peel_to_id_in_place`.
// Needed for `peel_to_id`.
use gix_ref::file::ReferenceExt;
let head_id = reference.peel_to_id_in_place(&store, &odb)?;
let head_id = reference.peel_to_id(&store, &odb)?;
let git_dir = worktree_path.join(".git");
let index = gix_index::File::at(git_dir.join("index"), gix_hash::Kind::Sha1, false, Default::default())?;


@@ -55,7 +55,7 @@ pub enum Error {
Context(#[from] crate::protocol::context::decode::Error),
#[error("Credentials for {url:?} could not be obtained")]
CredentialsMissing { url: BString },
#[error("The url wasn't provided in input - the git credentials protocol mandates this")]
#[error("Either 'url' field or both 'protocol' and 'host' fields must be provided")]
UrlMissing,
}
@@ -89,15 +89,19 @@ pub(crate) mod function {
let mut buf = Vec::<u8>::with_capacity(512);
stdin.read_to_end(&mut buf)?;
let ctx = Context::from_bytes(&buf)?;
if ctx.url.is_none() {
if ctx.url.is_none() && (ctx.protocol.is_none() || ctx.host.is_none()) {
return Err(Error::UrlMissing);
}
let res = credentials(action, ctx).map_err(|err| Error::Helper { source: Box::new(err) })?;
let res = credentials(action, ctx.clone()).map_err(|err| Error::Helper { source: Box::new(err) })?;
match (action, res) {
(Action::Get, None) => {
return Err(Error::CredentialsMissing {
url: Context::from_bytes(&buf)?.url.expect("present and checked above"),
})
let ctx_for_error = ctx;
let url = ctx_for_error
.url
.clone()
.or_else(|| ctx_for_error.to_url())
.expect("URL is available either directly or via protocol+host which we checked for");
return Err(Error::CredentialsMissing { url });
}
(Action::Get, Some(ctx)) => ctx.write_to(stdout)?,
(Action::Erase | Action::Store, None) => {}


@@ -92,7 +92,11 @@ mod mutate {
/// normally this isn't the case.
#[allow(clippy::result_large_err)]
pub fn destructure_url_in_place(&mut self, use_http_path: bool) -> Result<&mut Self, protocol::Error> {
let url = gix_url::parse(self.url.as_ref().ok_or(protocol::Error::UrlMissing)?.as_ref())?;
if self.url.is_none() {
self.url = Some(self.to_url().ok_or(protocol::Error::UrlMissing)?);
}
let url = gix_url::parse(self.url.as_ref().expect("URL is present after check above").as_ref())?;
self.protocol = Some(url.scheme.as_str().into());
self.username = url.user().map(ToOwned::to_owned);
self.password = url.password().map(ToOwned::to_owned);


@@ -20,7 +20,7 @@ pub type Result = std::result::Result<Option<Outcome>, Error>;
pub enum Error {
#[error(transparent)]
UrlParse(#[from] gix_url::parse::Error),
#[error("The 'url' field must be set when performing a 'get/fill' action")]
#[error("Either 'url' field or both 'protocol' and 'host' fields must be provided")]
UrlMissing,
#[error(transparent)]
ContextDecode(#[from] context::decode::Error),


@@ -0,0 +1,94 @@
use gix_credentials::program::main;
use std::io::Cursor;
#[derive(Debug, thiserror::Error)]
#[error("Test error")]
struct TestError;
#[test]
fn protocol_and_host_without_url_is_valid() {
let input = b"protocol=https\nhost=github.com\n";
let mut output = Vec::new();
let mut called = false;
let result = main(
["get".into()],
Cursor::new(input),
&mut output,
|_action, context| -> Result<Option<gix_credentials::protocol::Context>, TestError> {
assert_eq!(context.protocol.as_deref(), Some("https"));
assert_eq!(context.host.as_deref(), Some("github.com"));
assert_eq!(context.url, None, "the URL isn't automatically populated");
called = true;
Ok(None)
},
);
// This should fail because our mock helper returned None (no credentials found)
// but it should NOT fail because of missing URL
match result {
Err(gix_credentials::program::main::Error::CredentialsMissing { .. }) => {
assert!(
called,
"The helper gets called, but as nothing is provided in the function it ultimately fails"
);
}
other => panic!("Expected CredentialsMissing error, got: {other:?}"),
}
}
#[test]
fn missing_protocol_with_only_host_or_protocol_fails() {
for input in ["host=github.com\n", "protocol=https\n"] {
let mut output = Vec::new();
let mut called = false;
let result = main(
["get".into()],
Cursor::new(input),
&mut output,
|_action, _context| -> Result<Option<gix_credentials::protocol::Context>, TestError> {
called = true;
Ok(None)
},
);
match result {
Err(gix_credentials::program::main::Error::UrlMissing) => {
assert!(!called, "the context is lacking, hence nothing gets called");
}
other => panic!("Expected UrlMissing error, got: {other:?}"),
}
}
}
#[test]
fn url_alone_is_valid() {
let input = b"url=https://github.com\n";
let mut output = Vec::new();
let mut called = false;
let result = main(
["get".into()],
Cursor::new(input),
&mut output,
|_action, context| -> Result<Option<gix_credentials::protocol::Context>, TestError> {
called = true;
assert_eq!(context.url.unwrap(), "https://github.com");
assert_eq!(context.host, None, "not auto-populated");
assert_eq!(context.protocol, None, "not auto-populated");
Ok(None)
},
);
// This should fail because our mock helper returned None (no credentials found)
// but it should NOT fail because of missing URL
match result {
Err(gix_credentials::program::main::Error::CredentialsMissing { .. }) => {
assert!(called);
}
other => panic!("Expected CredentialsMissing error, got: {other:?}"),
}
}

View File

@@ -1 +1,2 @@
mod from_custom_definition;
mod main;

View File

@@ -72,6 +72,48 @@ mod destructure_url_in_place {
true,
);
}
#[test]
fn protocol_and_host_with_path_without_url_constructs_full_url() {
let mut ctx = Context {
protocol: Some("https".into()),
host: Some("github.com".into()),
path: Some("org/repo".into()),
username: Some("user".into()),
password: Some("pass-to-be-ignored".into()),
..Default::default()
};
ctx.destructure_url_in_place(false)
.expect("should work with protocol, host and path");
assert_eq!(
ctx.url.unwrap(),
"https://user@github.com/org/repo",
"URL should be constructed from all provided fields, except password"
);
// Original fields should be preserved
assert_eq!(ctx.protocol.as_deref(), Some("https"));
assert_eq!(ctx.host.as_deref(), Some("github.com"));
assert_eq!(ctx.path.unwrap(), "org/repo");
}
#[test]
fn missing_protocol_or_host_without_url_fails() {
let mut ctx_no_protocol = Context {
host: Some("github.com".into()),
..Default::default()
};
assert_eq!(
ctx_no_protocol.destructure_url_in_place(false).unwrap_err().to_string(),
"Either 'url' field or both 'protocol' and 'host' fields must be provided"
);
let mut ctx_no_host = Context {
protocol: Some("https".into()),
..Default::default()
};
assert!(ctx_no_host.destructure_url_in_place(false).is_err());
}
}
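The `protocol_and_host_with_path_without_url_constructs_full_url` test above expects `https://user@github.com/org/repo` from the individual fields, with the password deliberately left out. A hedged sketch of that reconstruction (this is not the gix-url implementation, just the shape the test asserts):

```rust
// Build "<protocol>://[user@]<host>[/<path>]" from optional context fields.
// The password is never embedded in the reconstructed URL.
fn url_from_parts(protocol: &str, host: &str, username: Option<&str>, path: Option<&str>) -> String {
    let mut url = String::new();
    url.push_str(protocol);
    url.push_str("://");
    if let Some(user) = username {
        url.push_str(user);
        url.push('@');
    }
    url.push_str(host);
    if let Some(path) = path {
        url.push('/');
        url.push_str(path.trim_start_matches('/'));
    }
    url
}

fn main() {
    assert_eq!(
        url_from_parts("https", "github.com", Some("user"), Some("org/repo")),
        "https://user@github.com/org/repo"
    );
    assert_eq!(url_from_parts("https", "github.com", None, None), "https://github.com");
}
```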
mod to_prompt {

View File

@@ -168,7 +168,7 @@ pub(crate) mod function {
}
}
}
if cursor.parent().is_some_and(|p| p.as_os_str().is_empty()) {
if cursor.as_os_str().is_empty() || cursor.as_os_str() == OsStr::new(".") {
cursor = cwd.to_path_buf();
dir_made_absolute = true;
}
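The fix above changes the normalization condition: instead of inspecting the parent, it replaces an empty or `"."` cursor with the current working directory *before* the parent-walk begins, so the cwd itself is checked as a candidate. A simplified, self-contained model of that search (the repository check is injected as a closure, since the real code probes for `.git`):

```rust
use std::path::{Path, PathBuf};

// Simplified sketch of the upwards search: "" and "." are normalized to the
// cwd first, so the cwd itself participates in the walk - the case the fix
// addresses.
fn discover_upwards(start: &Path, cwd: &Path, is_repo: &dyn Fn(&Path) -> bool) -> Option<PathBuf> {
    let mut cursor = start.to_path_buf();
    if cursor.as_os_str().is_empty() || cursor.as_os_str() == std::ffi::OsStr::new(".") {
        cursor = cwd.to_path_buf();
    }
    loop {
        if is_repo(&cursor) {
            return Some(cursor);
        }
        // Strip one component; stop once nothing is left to strip.
        if !cursor.pop() {
            return None;
        }
    }
}

fn main() {
    let cwd = Path::new("/work/repo");
    let is_repo = |p: &Path| p == Path::new("/work/repo");
    // Searching from "." must find the repository in the cwd itself.
    assert_eq!(
        discover_upwards(Path::new("."), cwd, &is_repo),
        Some(PathBuf::from("/work/repo"))
    );
    // Searching from a nested directory walks up to it.
    assert_eq!(
        discover_upwards(Path::new("/work/repo/some/nested"), cwd, &is_repo),
        Some(PathBuf::from("/work/repo"))
    );
}
```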

View File

@@ -3,6 +3,23 @@ use std::path::{Path, PathBuf};
use gix_discover::upwards::Options;
use serial_test::serial;
#[test]
#[serial]
fn in_cwd_upwards_from_nested_dir() -> gix_testtools::Result {
let repo = gix_testtools::scripted_fixture_read_only("make_basic_repo.sh")?;
let _keep = gix_testtools::set_current_dir(repo)?;
for dir in ["subdir", "some/very/deeply/nested/subdir"] {
let (repo_path, _trust) = gix_discover::upwards(Path::new(dir))?;
assert_eq!(
repo_path.kind(),
gix_discover::repository::Kind::WorkTree { linked_git_dir: None },
);
assert_eq!(repo_path.as_ref(), Path::new("."), "{dir}");
}
Ok(())
}
#[test]
#[serial]
fn upwards_bare_repo_with_index() -> gix_testtools::Result {
@@ -48,7 +65,7 @@ fn in_cwd_upwards_nonbare_repo_without_index() -> gix_testtools::Result {
fn upwards_with_relative_directories_and_optional_ceiling() -> gix_testtools::Result {
let repo = gix_testtools::scripted_fixture_read_only("make_basic_repo.sh")?;
let _keep = gix_testtools::set_current_dir(repo.join("subdir"))?;
let _keep = gix_testtools::set_current_dir(repo.join("some"))?;
let cwd = std::env::current_dir()?;
for (search_dir, ceiling_dir_component) in [
@@ -56,10 +73,13 @@ fn upwards_with_relative_directories_and_optional_ceiling() -> gix_testtools::Re
(".", "./.."),
("./.", "./.."),
(".", "./does-not-exist/../.."),
("./././very/deeply/nested/subdir", ".."),
("very/deeply/nested/subdir", ".."),
] {
let search_dir = Path::new(search_dir);
let ceiling_dir = cwd.join(ceiling_dir_component);
let (repo_path, _trust) = gix_discover::upwards_opts(
search_dir.as_ref(),
search_dir,
Options {
ceiling_dirs: vec![ceiling_dir],
..Default::default()
@@ -68,12 +88,12 @@ fn upwards_with_relative_directories_and_optional_ceiling() -> gix_testtools::Re
.expect("ceiling dir should allow us to discover the repo");
assert_repo_is_current_workdir(repo_path, Path::new(".."));
let (repo_path, _trust) = gix_discover::upwards_opts(search_dir.as_ref(), Default::default())
.expect("without ceiling dir we see the same");
let (repo_path, _trust) =
gix_discover::upwards_opts(search_dir, Default::default()).expect("without ceiling dir we see the same");
assert_repo_is_current_workdir(repo_path, Path::new(".."));
let (repo_path, _trust) = gix_discover::upwards_opts(
search_dir.as_ref(),
search_dir,
Options {
ceiling_dirs: vec![PathBuf::from("..")],
..Default::default()
@@ -83,7 +103,7 @@ fn upwards_with_relative_directories_and_optional_ceiling() -> gix_testtools::Re
assert_repo_is_current_workdir(repo_path, Path::new(".."));
let err = gix_discover::upwards_opts(
search_dir.as_ref(),
search_dir,
Options {
ceiling_dirs: vec![PathBuf::from(".")],
..Default::default()
@@ -91,7 +111,14 @@ fn upwards_with_relative_directories_and_optional_ceiling() -> gix_testtools::Re
)
.unwrap_err();
assert!(matches!(err, gix_discover::upwards::Error::NoMatchingCeilingDir));
if search_dir.parent() == Some(".".as_ref()) || search_dir.parent() == Some("".as_ref()) {
assert!(matches!(err, gix_discover::upwards::Error::NoMatchingCeilingDir));
} else {
assert!(matches!(
err,
gix_discover::upwards::Error::NoGitRepositoryWithinCeiling { .. }
));
}
}
Ok(())

View File

@@ -113,4 +113,16 @@ impl Kind {
Kind::Sha1 => ObjectId::null_sha1(),
}
}
/// The hash of an empty blob.
#[inline]
pub const fn empty_blob(&self) -> ObjectId {
ObjectId::empty_blob(*self)
}
/// The hash of an empty tree.
#[inline]
pub const fn empty_tree(&self) -> ObjectId {
ObjectId::empty_tree(*self)
}
}

View File

@@ -148,6 +148,22 @@ impl oid {
Kind::Sha1 => &self.bytes == oid::null_sha1().as_bytes(),
}
}
/// Returns `true` if this hash is equal to an empty blob.
#[inline]
pub fn is_empty_blob(&self) -> bool {
match self.kind() {
Kind::Sha1 => &self.bytes == oid::empty_blob_sha1().as_bytes(),
}
}
/// Returns `true` if this hash is equal to an empty tree.
#[inline]
pub fn is_empty_tree(&self) -> bool {
match self.kind() {
Kind::Sha1 => &self.bytes == oid::empty_tree_sha1().as_bytes(),
}
}
}
/// Sha1 specific methods
@@ -175,6 +191,18 @@ impl oid {
pub(crate) fn null_sha1() -> &'static Self {
oid::from_bytes([0u8; SIZE_OF_SHA1_DIGEST].as_ref())
}
/// Returns an oid representing the hash of an empty blob.
#[inline]
pub(crate) fn empty_blob_sha1() -> &'static Self {
oid::from_bytes(b"\xe6\x9d\xe2\x9b\xb2\xd1\xd6\x43\x4b\x8b\x29\xae\x77\x5a\xd8\xc2\xe4\x8c\x53\x91")
}
/// Returns an oid representing the hash of an empty tree.
#[inline]
pub(crate) fn empty_tree_sha1() -> &'static Self {
oid::from_bytes(b"\x4b\x82\x5d\xc6\x42\xcb\x6e\xb9\xa0\x60\xe5\x4b\xf8\xd6\x92\x88\xfb\xee\x49\x04")
}
}
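The byte arrays above are the well-known SHA-1 digests git assigns to the empty blob (the hash of `"blob 0\0"`) and the empty tree (the hash of `"tree 0\0"`). This small check only confirms the constants match the familiar hex forms, without pulling in a SHA-1 implementation:

```rust
// Render the hard-coded digests as hex and compare against the hashes git
// itself reports for the empty blob and empty tree.
fn to_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

const EMPTY_BLOB_SHA1: [u8; 20] = [
    0xe6, 0x9d, 0xe2, 0x9b, 0xb2, 0xd1, 0xd6, 0x43, 0x4b, 0x8b, 0x29, 0xae, 0x77, 0x5a, 0xd8, 0xc2,
    0xe4, 0x8c, 0x53, 0x91,
];
const EMPTY_TREE_SHA1: [u8; 20] = [
    0x4b, 0x82, 0x5d, 0xc6, 0x42, 0xcb, 0x6e, 0xb9, 0xa0, 0x60, 0xe5, 0x4b, 0xf8, 0xd6, 0x92, 0x88,
    0xfb, 0xee, 0x49, 0x04,
];

fn main() {
    assert_eq!(to_hex(&EMPTY_BLOB_SHA1), "e69de29bb2d1d6434b8b29ae775ad8c2e48c5391");
    assert_eq!(to_hex(&EMPTY_TREE_SHA1), "4b825dc642cb6eb9a060e54bf8d69288fbee4904");
}
```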
impl AsRef<oid> for &oid {

View File

@@ -1,3 +1,5 @@
use gix_hash::{Kind, ObjectId};
mod from_hex_len {
use gix_hash::Kind;
@@ -14,3 +16,15 @@ mod from_hex_len {
assert_eq!(Kind::from_hex_len(65), None);
}
}
#[test]
fn empty_blob() {
let sha1 = Kind::Sha1;
assert_eq!(sha1.empty_blob(), ObjectId::empty_blob(sha1));
}
#[test]
fn empty_tree() {
let sha1 = Kind::Sha1;
assert_eq!(sha1.empty_tree(), ObjectId::empty_tree(sha1));
}

View File

@@ -19,3 +19,25 @@ fn is_null() {
assert!(gix_hash::Kind::Sha1.null().is_null());
assert!(gix_hash::Kind::Sha1.null().as_ref().is_null());
}
#[test]
fn is_empty_blob() {
let empty_blob = gix_hash::ObjectId::empty_blob(gix_hash::Kind::Sha1);
assert!(empty_blob.is_empty_blob());
assert!(empty_blob.as_ref().is_empty_blob());
let non_empty = gix_hash::Kind::Sha1.null();
assert!(!non_empty.is_empty_blob());
assert!(!non_empty.as_ref().is_empty_blob());
}
#[test]
fn is_empty_tree() {
let empty_tree = gix_hash::ObjectId::empty_tree(gix_hash::Kind::Sha1);
assert!(empty_tree.is_empty_tree());
assert!(empty_tree.as_ref().is_empty_tree());
let non_empty = gix_hash::Kind::Sha1.null();
assert!(!non_empty.is_empty_tree());
assert!(!non_empty.as_ref().is_empty_tree());
}

View File

@@ -36,7 +36,7 @@ fn run() -> crate::Result {
.iter()
.filter_map(|name| {
refs.try_find(*name).expect("one tag per commit").map(|mut r| {
r.peel_to_id_in_place(&refs, &store).expect("works");
r.peel_to_id(&refs, &store).expect("works");
r.target.into_id()
})
})

View File

@@ -373,11 +373,7 @@ fn mark_all_refs_in_repo(
let _span = gix_trace::detail!("mark_all_refs");
for local_ref in store.iter()?.all()? {
let mut local_ref = local_ref?;
let id = local_ref.peel_to_id_in_place_packed(
store,
objects,
store.cached_packed_buffer()?.as_ref().map(|b| &***b),
)?;
let id = local_ref.peel_to_id_packed(store, objects, store.cached_packed_buffer()?.as_ref().map(|b| &***b))?;
let mut is_complete = false;
if let Some(commit) = graph
.get_or_insert_commit(id, |md| {

View File

@@ -148,7 +148,7 @@ pub enum Kind {
Object,
/// A ref that points to another reference, adding a level of indirection.
///
/// It can be resolved to an id using the [`peel_in_place_to_id()`][`crate::file::ReferenceExt::peel_to_id_in_place()`] method.
/// It can be resolved to an id using the [`peel_to_id()`][`crate::file::ReferenceExt::peel_to_id()`] method.
Symbolic,
}

View File

@@ -2,7 +2,7 @@
pub mod to_id {
use gix_object::bstr::BString;
/// The error returned by [`crate::file::ReferenceExt::peel_to_id_in_place()`].
/// The error returned by [`crate::file::ReferenceExt::peel_to_id()`].
#[derive(Debug, thiserror::Error)]
#[allow(missing_docs)]
pub enum Error {
@@ -21,7 +21,7 @@ pub mod to_object {
use crate::file;
/// The error returned by [`file::ReferenceExt::follow_to_object_in_place_packed()`].
/// The error returned by [`file::ReferenceExt::follow_to_object_packed()`].
#[derive(Debug, thiserror::Error)]
#[allow(missing_docs)]
pub enum Error {

View File

@@ -12,7 +12,7 @@ pub struct Reference {
pub target: Target,
/// The fully peeled object to which this reference ultimately points after following all symbolic refs and all annotated
/// tags. Only guaranteed to be set after
/// [`Reference::peel_to_id_in_place()`](crate::file::ReferenceExt) was called or if this reference originated
/// [`Reference::peel_to_id()`](crate::file::ReferenceExt::peel_to_id) was called or if this reference originated
/// from a packed ref.
pub peeled: Option<ObjectId>,
}

View File

@@ -26,14 +26,31 @@ pub trait ReferenceExt: Sealed {
///
/// This is useful to learn where this reference is ultimately pointing to after following all symbolic
/// refs and all annotated tags to the first non-tag object.
#[deprecated = "Use `peel_to_id()` instead"]
fn peel_to_id_in_place(
&mut self,
store: &file::Store,
objects: &dyn gix_object::Find,
) -> Result<ObjectId, peel::to_id::Error>;
/// Follow all symbolic targets this reference might point to and peel the underlying object
/// to the end of the tag-chain, returning the first non-tag object the annotated tag points to,
/// using `objects` to access them and `store` to lookup symbolic references.
///
/// This is useful to learn where this reference is ultimately pointing to after following all symbolic
/// refs and all annotated tags to the first non-tag object.
///
/// Note that this method mutates `self` in place if it does not already point to a
/// non-symbolic object.
fn peel_to_id(
&mut self,
store: &file::Store,
objects: &dyn gix_object::Find,
) -> Result<ObjectId, peel::to_id::Error>;
/// Like [`ReferenceExt::peel_to_id_in_place()`], but with support for a known stable `packed` buffer
/// to use for resolving symbolic links.
#[deprecated = "Use `peel_to_id_packed()` instead"]
fn peel_to_id_in_place_packed(
&mut self,
store: &file::Store,
@@ -41,14 +58,32 @@ pub trait ReferenceExt: Sealed {
packed: Option<&packed::Buffer>,
) -> Result<ObjectId, peel::to_id::Error>;
/// Like [`ReferenceExt::peel_to_id()`], but with support for a known stable `packed` buffer to
/// use for resolving symbolic links.
fn peel_to_id_packed(
&mut self,
store: &file::Store,
objects: &dyn gix_object::Find,
packed: Option<&packed::Buffer>,
) -> Result<ObjectId, peel::to_id::Error>;
/// Like [`ReferenceExt::follow()`], but follows all symbolic references while gracefully handling loops,
/// altering this instance in place.
#[deprecated = "Use `follow_to_object_packed()` instead"]
fn follow_to_object_in_place_packed(
&mut self,
store: &file::Store,
packed: Option<&packed::Buffer>,
) -> Result<ObjectId, peel::to_object::Error>;
/// Like [`ReferenceExt::follow()`], but follows all symbolic references while gracefully handling loops,
/// altering this instance in place.
fn follow_to_object_packed(
&mut self,
store: &file::Store,
packed: Option<&packed::Buffer>,
) -> Result<ObjectId, peel::to_object::Error>;
/// Follow this symbolic reference one level and return the ref it refers to.
///
/// Returns `None` if this is not a symbolic reference, hence the leaf of the chain.
@@ -84,13 +119,21 @@ impl ReferenceExt for Reference {
&mut self,
store: &file::Store,
objects: &dyn gix_object::Find,
) -> Result<ObjectId, peel::to_id::Error> {
self.peel_to_id(store, objects)
}
fn peel_to_id(
&mut self,
store: &file::Store,
objects: &dyn gix_object::Find,
) -> Result<ObjectId, peel::to_id::Error> {
let packed = store.assure_packed_refs_uptodate().map_err(|err| {
peel::to_id::Error::FollowToObject(peel::to_object::Error::Follow(file::find::existing::Error::Find(
file::find::Error::PackedOpen(err),
)))
})?;
self.peel_to_id_in_place_packed(store, objects, packed.as_ref().map(|b| &***b))
self.peel_to_id_packed(store, objects, packed.as_ref().map(|b| &***b))
}
fn peel_to_id_in_place_packed(
@@ -98,6 +141,15 @@ impl ReferenceExt for Reference {
store: &file::Store,
objects: &dyn gix_object::Find,
packed: Option<&packed::Buffer>,
) -> Result<ObjectId, peel::to_id::Error> {
self.peel_to_id_packed(store, objects, packed)
}
fn peel_to_id_packed(
&mut self,
store: &file::Store,
objects: &dyn gix_object::Find,
packed: Option<&packed::Buffer>,
) -> Result<ObjectId, peel::to_id::Error> {
match self.peeled {
Some(peeled) => {
@@ -105,7 +157,7 @@ impl ReferenceExt for Reference {
Ok(peeled)
}
None => {
let mut oid = self.follow_to_object_in_place_packed(store, packed)?;
let mut oid = self.follow_to_object_packed(store, packed)?;
let mut buf = Vec::new();
let peeled_id = loop {
let gix_object::Data { kind, data } =
@@ -138,6 +190,14 @@ impl ReferenceExt for Reference {
&mut self,
store: &file::Store,
packed: Option<&packed::Buffer>,
) -> Result<ObjectId, peel::to_object::Error> {
self.follow_to_object_packed(store, packed)
}
fn follow_to_object_packed(
&mut self,
store: &file::Store,
packed: Option<&packed::Buffer>,
) -> Result<ObjectId, peel::to_object::Error> {
match self.target {
Target::Object(id) => Ok(id),
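The renames in this trait all follow one pattern: the old `*_in_place` method is kept as a `#[deprecated]` shim that forwards to the new name, so existing callers keep compiling while being nudged to migrate. A minimal, self-contained sketch of that shape (the body is a stand-in, not the real peeling logic):

```rust
struct Reference {
    target: u32,
}

impl Reference {
    // The old name stays callable, emits a deprecation warning, and simply
    // forwards to the renamed method.
    #[deprecated = "Use `peel_to_id()` instead"]
    fn peel_to_id_in_place(&mut self) -> u32 {
        self.peel_to_id()
    }

    // The real method follows symbolic refs and tag chains; here we just
    // return the stored target to show the forwarding shape.
    fn peel_to_id(&mut self) -> u32 {
        self.target
    }
}

fn main() {
    let mut r = Reference { target: 42 };
    assert_eq!(r.peel_to_id(), 42);
    #[allow(deprecated)]
    {
        assert_eq!(r.peel_to_id_in_place(), 42);
    }
}
```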

View File

@@ -81,11 +81,11 @@ mod peel {
let store = store_with_packed_refs()?;
let mut head: Reference = store.find_loose("HEAD")?.into();
let expected = hex_to_id("134385f6d781b7e97062102c6a483440bfda2a03");
assert_eq!(head.peel_to_id_in_place(&store, &EmptyCommit)?, expected);
assert_eq!(head.peel_to_id(&store, &EmptyCommit)?, expected);
assert_eq!(head.target.try_id().map(ToOwned::to_owned), Some(expected));
let mut head = store.find("dt1")?;
assert_eq!(head.peel_to_id_in_place(&store, &gix_object::find::Never)?, expected);
assert_eq!(head.peel_to_id(&store, &gix_object::find::Never)?, expected);
assert_eq!(head.target.into_id(), expected);
Ok(())
}
@@ -113,7 +113,7 @@ mod peel {
"but following doesn't do that, only real peeling does"
);
head.peel_to_id_in_place(&store, &EmptyCommit)?;
head.peel_to_id(&store, &EmptyCommit)?;
assert_eq!(
head.target.try_id().map(ToOwned::to_owned),
Some(final_stop),
@@ -135,23 +135,19 @@ mod peel {
assert_eq!(r.kind(), gix_ref::Kind::Symbolic, "there is something to peel");
let commit = hex_to_id("134385f6d781b7e97062102c6a483440bfda2a03");
assert_eq!(r.peel_to_id_in_place(&store, &EmptyCommit)?, commit);
assert_eq!(r.peel_to_id(&store, &EmptyCommit)?, commit);
assert_eq!(r.name.as_bstr(), "refs/remotes/origin/multi-link-target3");
let mut r: Reference = store.find_loose("dt1")?.into();
assert_eq!(
r.peel_to_id_in_place(&store, &EmptyCommit)?,
r.peel_to_id(&store, &EmptyCommit)?,
hex_to_id("4c3f4cce493d7beb45012e478021b5f65295e5a3"),
"points to a tag object without actual object lookup"
);
let odb = gix_odb::at(store.git_dir().join("objects"))?;
let mut r: Reference = store.find_loose("dt1")?.into();
assert_eq!(
r.peel_to_id_in_place(&store, &odb)?,
commit,
"points to the commit with lookup"
);
assert_eq!(r.peel_to_id(&store, &odb)?, commit, "points to the commit with lookup");
Ok(())
}
@@ -162,7 +158,7 @@ mod peel {
let store = file::store_at_with_args("make_multi_hop_ref.sh", packed)?;
let odb = gix_odb::at(store.git_dir().join("objects"))?;
let mut r: Reference = store.find("multi-hop")?;
r.peel_to_id_in_place(&store, &odb)?;
r.peel_to_id(&store, &odb)?;
let commit_id = hex_to_id("134385f6d781b7e97062102c6a483440bfda2a03");
assert_eq!(r.peeled, Some(commit_id));
@@ -172,8 +168,7 @@ mod peel {
assert_eq!(obj.kind, gix_object::Kind::Commit, "always peeled to the first non-tag");
let mut r: Reference = store.find("multi-hop")?;
let tag_id =
r.follow_to_object_in_place_packed(&store, store.cached_packed_buffer()?.as_ref().map(|p| &***p))?;
let tag_id = r.follow_to_object_packed(&store, store.cached_packed_buffer()?.as_ref().map(|p| &***p))?;
let obj = odb.find(&tag_id, &mut buf)?;
assert_eq!(obj.kind, gix_object::Kind::Tag, "the first direct object target");
assert_eq!(
@@ -183,7 +178,7 @@ mod peel {
);
let mut r: Reference = store.find("multi-hop2")?;
let other_tag_id =
r.follow_to_object_in_place_packed(&store, store.cached_packed_buffer()?.as_ref().map(|p| &***p))?;
r.follow_to_object_packed(&store, store.cached_packed_buffer()?.as_ref().map(|p| &***p))?;
assert_eq!(other_tag_id, tag_id, "it can follow with multiple hops as well");
}
Ok(())
@@ -197,14 +192,14 @@ mod peel {
assert_eq!(r.name.as_bstr(), "refs/loop-a");
assert!(matches!(
r.peel_to_id_in_place(&store, &gix_object::find::Never).unwrap_err(),
r.peel_to_id(&store, &gix_object::find::Never).unwrap_err(),
gix_ref::peel::to_id::Error::FollowToObject(gix_ref::peel::to_object::Error::Cycle { .. })
));
assert_eq!(r.name.as_bstr(), "refs/loop-a", "the ref is not changed on error");
let mut r: Reference = store.find_loose("loop-a")?.into();
let err = r
.follow_to_object_in_place_packed(&store, store.cached_packed_buffer()?.as_ref().map(|p| &***p))
.follow_to_object_packed(&store, store.cached_packed_buffer()?.as_ref().map(|p| &***p))
.unwrap_err();
assert!(matches!(err, gix_ref::peel::to_object::Error::Cycle { .. }));
Ok(())

View File

@@ -60,7 +60,7 @@ fn into_peel(
store: &gix_ref::file::Store,
odb: gix_odb::Handle,
) -> impl Fn(gix_ref::Reference) -> gix_hash::ObjectId + '_ {
move |mut r: gix_ref::Reference| r.peel_to_id_in_place(store, &odb).unwrap()
move |mut r: gix_ref::Reference| r.peel_to_id(store, &odb).unwrap()
}
enum Mode {

View File

@@ -97,8 +97,8 @@ pub mod main_worktree {
})?;
let root_tree_id = match &self.ref_name {
Some(reference_val) => Some(repo.find_reference(reference_val)?.peel_to_id_in_place()?),
None => repo.head()?.try_peel_to_id_in_place()?,
Some(reference_val) => Some(repo.find_reference(reference_val)?.peel_to_id()?),
None => repo.head()?.try_peel_to_id()?,
};
let root_tree = match root_tree_id {

View File

@@ -6,7 +6,7 @@ use std::convert::Infallible;
/// An empty array of a type usable with the `gix::easy` API to help declaring no parents should be used
pub const NO_PARENT_IDS: [gix_hash::ObjectId; 0] = [];
/// The error returned by [`commit(…)`][crate::Repository::commit()].
/// The error returned by [`commit(…)`](crate::Repository::commit()).
#[derive(Debug, thiserror::Error)]
#[allow(missing_docs)]
pub enum Error {
@@ -122,7 +122,7 @@ pub mod describe {
.filter_map(Result::ok)
.filter_map(|mut r: crate::Reference<'_>| {
let target_id = r.target().try_id().map(ToOwned::to_owned);
let peeled_id = r.peel_to_id_in_place().ok()?;
let peeled_id = r.peel_to_id().ok()?;
let (prio, tag_time) = match target_id {
Some(target_id) if peeled_id != *target_id => {
let tag = repo.find_object(target_id).ok()?.try_into_tag().ok()?;

View File

@@ -349,7 +349,7 @@ impl Cache {
if let Ok(mut head) = repo.head() {
let ctx = filters.driver_context_mut();
ctx.ref_name = head.referent_name().map(|name| name.as_bstr().to_owned());
ctx.treeish = head.peel_to_commit_in_place().ok().map(|commit| commit.id);
ctx.treeish = head.peel_to_commit().ok().map(|commit| commit.id);
}
filters
};

View File

@@ -7,8 +7,8 @@ use crate::{
mod error {
use crate::{object, reference};
/// The error returned by [`Head::peel_to_id_in_place()`](super::Head::try_peel_to_id_in_place())
/// and [`Head::into_fully_peeled_id()`](super::Head::try_into_peeled_id()).
/// The error returned by [`Head::peel_to_id()`](super::Head::try_peel_to_id()) and
/// [`Head::into_fully_peeled_id()`](super::Head::try_into_peeled_id()).
#[derive(Debug, thiserror::Error)]
#[allow(missing_docs)]
pub enum Error {
@@ -42,7 +42,7 @@ pub mod into_id {
pub mod to_commit {
use crate::object;
/// The error returned by [`Head::peel_to_commit_in_place()`](super::Head::peel_to_commit_in_place()).
/// The error returned by [`Head::peel_to_commit()`](super::Head::peel_to_commit()).
#[derive(Debug, thiserror::Error)]
#[allow(missing_docs)]
pub enum Error {
@@ -55,7 +55,7 @@ pub mod to_commit {
///
pub mod to_object {
/// The error returned by [`Head::peel_to_object_in_place()`](super::Head::peel_to_object_in_place()).
/// The error returned by [`Head::peel_to_object()`](super::Head::peel_to_object()).
#[derive(Debug, thiserror::Error)]
#[allow(missing_docs)]
pub enum Error {
@@ -72,7 +72,7 @@ impl<'repo> Head<'repo> {
/// The final target is obtained by following symbolic references and peeling tags to their final destination, which
/// typically is a commit, but can be any object.
pub fn into_peeled_id(mut self) -> Result<crate::Id<'repo>, into_id::Error> {
self.try_peel_to_id_in_place()?;
self.try_peel_to_id()?;
self.id().ok_or_else(|| match self.kind {
Kind::Symbolic(gix_ref::Reference { name, .. }) | Kind::Unborn(name) => into_id::Error::Unborn { name },
Kind::Detached { .. } => unreachable!("id can be returned after peeling"),
@@ -84,7 +84,7 @@ impl<'repo> Head<'repo> {
/// The final target is obtained by following symbolic references and peeling tags to their final destination, which
/// typically is a commit, but can be any object as well.
pub fn into_peeled_object(mut self) -> Result<crate::Object<'repo>, to_object::Error> {
self.peel_to_object_in_place()
self.peel_to_object()
}
/// Consume this instance and transform it into the final object that it points to, or `Ok(None)` if the `HEAD`
@@ -93,7 +93,7 @@ impl<'repo> Head<'repo> {
/// The final target is obtained by following symbolic references and peeling tags to their final destination, which
/// typically is a commit, but can be any object.
pub fn try_into_peeled_id(mut self) -> Result<Option<crate::Id<'repo>>, Error> {
self.try_peel_to_id_in_place()
self.try_peel_to_id()
}
/// Follow the symbolic reference of this head until its target object and peel it by following tag objects until there is no
@@ -103,7 +103,21 @@ impl<'repo> Head<'repo> {
///
/// The final target is obtained by following symbolic references and peeling tags to their final destination, which
/// typically is a commit, but can be any object.
#[deprecated = "Use `try_peel_to_id()` instead"]
pub fn try_peel_to_id_in_place(&mut self) -> Result<Option<crate::Id<'repo>>, Error> {
self.try_peel_to_id()
}
/// Follow the symbolic reference of this head until its target object and peel it by following tag objects until there is no
/// more object to follow, and return that object id.
///
/// Returns `Ok(None)` if the head is unborn.
///
/// The final target is obtained by following symbolic references and peeling tags to their final destination, which
/// typically is a commit, but can be any object.
///
/// Note that this method mutates `self` in place.
pub fn try_peel_to_id(&mut self) -> Result<Option<crate::Id<'repo>>, Error> {
Ok(Some(match &mut self.kind {
Kind::Unborn(_name) => return Ok(None),
Kind::Detached {
@@ -128,7 +142,7 @@ impl<'repo> Head<'repo> {
}
Kind::Symbolic(r) => {
let mut nr = r.clone().attach(self.repo);
let peeled = nr.peel_to_id_in_place();
let peeled = nr.peel_to_id();
*r = nr.detach();
peeled?
}
@@ -139,12 +153,21 @@ impl<'repo> Head<'repo> {
/// more object to follow, transform the id into a commit if possible and return that.
///
/// Returns an error if the head is unborn or if it doesn't point to a commit.
#[deprecated = "Use `peel_to_object()` instead"]
pub fn peel_to_object_in_place(&mut self) -> Result<crate::Object<'repo>, to_object::Error> {
let id = self
.try_peel_to_id_in_place()?
.ok_or_else(|| to_object::Error::Unborn {
name: self.referent_name().expect("unborn").to_owned(),
})?;
self.peel_to_object()
}
/// Follow the symbolic reference of this head until its target object and peel it by following tag objects until there is no
/// more object to follow, transform the id into a commit if possible and return that.
///
/// Returns an error if the head is unborn or if it doesn't point to a commit.
///
/// Note that this method mutates `self` in place.
pub fn peel_to_object(&mut self) -> Result<crate::Object<'repo>, to_object::Error> {
let id = self.try_peel_to_id()?.ok_or_else(|| to_object::Error::Unborn {
name: self.referent_name().expect("unborn").to_owned(),
})?;
id.object()
.map_err(|err| to_object::Error::Peel(Error::FindExistingObject(err)))
}
@@ -153,7 +176,18 @@ impl<'repo> Head<'repo> {
/// more object to follow, transform the id into a commit if possible and return that.
///
/// Returns an error if the head is unborn or if it doesn't point to a commit.
#[deprecated = "Use `peel_to_commit()` instead"]
pub fn peel_to_commit_in_place(&mut self) -> Result<crate::Commit<'repo>, to_commit::Error> {
Ok(self.peel_to_object_in_place()?.try_into_commit()?)
self.peel_to_commit()
}
/// Follow the symbolic reference of this head until its target object and peel it by following tag objects until there is no
/// more object to follow, transform the id into a commit if possible and return that.
///
/// Returns an error if the head is unborn or if it doesn't point to a commit.
///
/// Note that this method mutates `self` in place.
pub fn peel_to_commit(&mut self) -> Result<crate::Commit<'repo>, to_commit::Error> {
Ok(self.peel_to_object()?.try_into_commit()?)
}
}
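Behind all these `peel_to_*` methods sits the same loop: follow symbolic references until an object id is reached, while refusing to chase a cycle forever (the `refs/loop-a`/`loop-b` fixtures above exercise exactly that). A condensed, std-only model of the idea, with illustrative names:

```rust
use std::collections::HashMap;

// A ref either points at an object id directly or at another ref by name.
enum Target {
    Object(u32),
    Symbolic(&'static str),
}

// Follow symbolic hops up to a fixed bound; exceeding it is treated as a cycle.
fn peel(refs: &HashMap<&str, Target>, mut name: &'static str) -> Result<u32, String> {
    const MAX_HOPS: usize = 5;
    for _ in 0..MAX_HOPS {
        match refs.get(name) {
            Some(Target::Object(id)) => return Ok(*id),
            Some(Target::Symbolic(next)) => name = *next,
            None => return Err(format!("ref not found: {name}")),
        }
    }
    Err("cycle detected".into())
}

fn main() {
    let mut refs = HashMap::new();
    refs.insert("HEAD", Target::Symbolic("refs/heads/main"));
    refs.insert("refs/heads/main", Target::Object(1234));
    refs.insert("refs/loop-a", Target::Symbolic("refs/loop-b"));
    refs.insert("refs/loop-b", Target::Symbolic("refs/loop-a"));

    assert_eq!(peel(&refs, "HEAD"), Ok(1234));
    assert!(peel(&refs, "refs/loop-a").is_err());
}
```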

View File

@@ -86,7 +86,7 @@
doc = ::document_features::document_features!()
)]
#![cfg_attr(all(doc, feature = "document-features"), feature(doc_cfg, doc_auto_cfg))]
#![deny(missing_docs, rust_2018_idioms, unsafe_code)]
#![deny(missing_docs, unsafe_code)]
#![allow(clippy::result_large_err)]
// Re-exports to make this a potential one-stop shop crate avoiding people from having to reference various crates themselves.

View File

@@ -150,6 +150,23 @@ impl std::fmt::Debug for Object<'_> {
}
}
/// Note that the `data` written here might not correspond to the `id` of the `Blob` anymore if it was modified.
/// Also, this is merely for convenience when writing empty blobs to the ODB. For writing any blob, use
/// [`Repository::write_blob()`](crate::Repository::write_blob()).
impl gix_object::WriteTo for Blob<'_> {
fn write_to(&self, out: &mut dyn std::io::Write) -> std::io::Result<()> {
out.write_all(&self.data)
}
fn kind(&self) -> gix_object::Kind {
gix_object::Kind::Blob
}
fn size(&self) -> u64 {
self.data.len() as u64
}
}
/// In conjunction with the handles free list, leaving an empty Vec in place of the original causes it to not be
/// returned to the free list.
fn steal_from_freelist(data: &mut Vec<u8>) -> Vec<u8> {
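The new `WriteTo` impl above lets an empty `Blob` be handed to any sink that understands the trait's `kind`/`size`/`write_to` contract. A hedged sketch of that contract and of how a loose-object payload is framed from it (the `Blob` here is a stand-in type, and `to_loose_bytes` is an illustrative helper, not a gitoxide API):

```rust
use std::io::Write;

struct Blob<'a> {
    data: &'a [u8],
}

impl Blob<'_> {
    fn kind(&self) -> &'static str {
        "blob"
    }
    fn size(&self) -> u64 {
        self.data.len() as u64
    }
    fn write_to(&self, out: &mut dyn Write) -> std::io::Result<()> {
        out.write_all(self.data)
    }
    /// `"<kind> <size>\0"` followed by the payload, as hashed/stored by git.
    fn to_loose_bytes(&self) -> Vec<u8> {
        let mut buf = Vec::new();
        write!(buf, "{} {}\0", self.kind(), self.size()).expect("writing to a Vec cannot fail");
        self.write_to(&mut buf).expect("writing to a Vec cannot fail");
        buf
    }
}

fn main() {
    let blob = Blob { data: b"hello" };
    assert_eq!(blob.to_loose_bytes(), b"blob 5\0hello".to_vec());
}
```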

View File

@@ -22,8 +22,8 @@ pub mod edit {
///
pub mod peel {
/// The error returned by [`Reference::peel_to_id_in_place(…)`](crate::Reference::peel_to_id_in_place()) and
/// [`Reference::into_fully_peeled_id()`](crate::Reference::into_fully_peeled_id()).
/// The error returned by [`Reference::peel_to_id()`](crate::Reference::peel_to_id()) and
/// [`Reference::into_fully_peeled_id()`](crate::Reference::into_fully_peeled_id()).
#[derive(Debug, thiserror::Error)]
#[allow(missing_docs)]
pub enum Error {

View File

@@ -90,7 +90,7 @@ impl<'repo> Platform<'repo> {
impl Iter<'_, '_> {
/// Automatically peel references before yielding them during iteration.
///
/// This has the same effect as using `iter.map(|r| {r.peel_to_id_in_place(); r})`.
/// This has the same effect as using `iter.map(|r| {r.peel_to_id(); r})`.
///
/// # Note
///
@@ -112,13 +112,9 @@ impl<'r> Iterator for Iter<'_, 'r> {
.and_then(|mut r| {
if self.peel {
let repo = &self.repo;
r.peel_to_id_in_place_packed(
&repo.refs,
&repo.objects,
self.peel_with_packed.as_ref().map(|p| &***p),
)
.map_err(|err| Box::new(err) as Box<dyn std::error::Error + Send + Sync + 'static>)
.map(|_| r)
r.peel_to_id_packed(&repo.refs, &repo.objects, self.peel_with_packed.as_ref().map(|p| &***p))
.map_err(|err| Box::new(err) as Box<dyn std::error::Error + Send + Sync + 'static>)
.map(|_| r)
} else {
Ok(r)
}
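The iterator body above boxes any peel failure into `Box<dyn Error + Send + Sync>` so a single `Item` type can carry errors from different stages. A condensed sketch of that shape, with an illustrative fallible `peel` standing in for the real one:

```rust
use std::error::Error;

// Stand-in for peeling: fails when the id does not fit the expected range.
fn peel(id: u32) -> Result<u32, std::num::TryFromIntError> {
    u8::try_from(id).map(u32::from)
}

// Map each concrete error into a boxed trait object, keeping the item type
// uniform across failure sources.
fn iter_peeled(ids: Vec<u32>) -> impl Iterator<Item = Result<u32, Box<dyn Error + Send + Sync + 'static>>> {
    ids.into_iter()
        .map(|id| peel(id).map_err(|err| Box::new(err) as Box<dyn Error + Send + Sync + 'static>))
}

fn main() {
    let results: Vec<_> = iter_peeled(vec![1, 500]).collect();
    assert!(results[0].is_ok());
    assert!(results[1].is_err());
}
```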

View File

@@ -65,12 +65,26 @@ impl<'repo> Reference<'repo> {
/// Peeling
impl<'repo> Reference<'repo> {
/// Follow all symbolic targets this reference might point to and peel all annotated tags
/// to their first non-tag target, and return it,
/// to their first non-tag target, and return it.
///
/// This is useful to learn where this reference is ultimately pointing to after following
/// the chain of symbolic refs and annotated tags.
#[deprecated = "Use `peel_to_id()` instead"]
pub fn peel_to_id_in_place(&mut self) -> Result<Id<'repo>, peel::Error> {
let oid = self.inner.peel_to_id_in_place(&self.repo.refs, &self.repo.objects)?;
let oid = self.inner.peel_to_id(&self.repo.refs, &self.repo.objects)?;
Ok(Id::from_id(oid, self.repo))
}
/// Follow all symbolic targets this reference might point to and peel all annotated tags
/// to their first non-tag target, and return it.
///
/// This is useful to learn where this reference is ultimately pointing to after following
/// the chain of symbolic refs and annotated tags.
///
/// Note that this method mutates `self` in place if it does not already point to a
/// non-symbolic object.
pub fn peel_to_id(&mut self) -> Result<Id<'repo>, peel::Error> {
let oid = self.inner.peel_to_id(&self.repo.refs, &self.repo.objects)?;
Ok(Id::from_id(oid, self.repo))
}
@@ -79,26 +93,45 @@ impl<'repo> Reference<'repo> {
///
/// This is useful to learn where this reference is ultimately pointing to after following
/// the chain of symbolic refs and annotated tags.
#[deprecated = "Use `peel_to_id_packed()` instead"]
pub fn peel_to_id_in_place_packed(
&mut self,
packed: Option<&gix_ref::packed::Buffer>,
) -> Result<Id<'repo>, peel::Error> {
let oid = self
.inner
.peel_to_id_in_place_packed(&self.repo.refs, &self.repo.objects, packed)?;
.peel_to_id_packed(&self.repo.refs, &self.repo.objects, packed)?;
Ok(Id::from_id(oid, self.repo))
}
/// Similar to [`peel_to_id_in_place()`](Reference::peel_to_id_in_place()), but consumes this instance.
/// Follow all symbolic targets this reference might point to and peel all annotated tags
/// to their first non-tag target, and return it, reusing the `packed` buffer if available.
///
/// This is useful to learn where this reference is ultimately pointing to after following
/// the chain of symbolic refs and annotated tags.
///
/// Note that this method mutates `self` in place if it does not already point to a
/// non-symbolic object.
pub fn peel_to_id_packed(&mut self, packed: Option<&gix_ref::packed::Buffer>) -> Result<Id<'repo>, peel::Error> {
let oid = self
.inner
.peel_to_id_packed(&self.repo.refs, &self.repo.objects, packed)?;
Ok(Id::from_id(oid, self.repo))
}
/// Similar to [`peel_to_id()`](Reference::peel_to_id()), but consumes this instance.
pub fn into_fully_peeled_id(mut self) -> Result<Id<'repo>, peel::Error> {
self.peel_to_id_in_place()
self.peel_to_id()
}
/// Follow this reference's target until it points at an object directly, and peel that object until
/// its type matches the given `kind`. It's an error to try to peel to a kind that this ref doesn't point to.
///
/// Note that this ref will point to the first target object afterward, which may be a tag. This is different
/// from [`peel_to_id_in_place()`](Self::peel_to_id_in_place()) where it will point to the first non-tag object.
/// from [`peel_to_id()`](Self::peel_to_id()) where it will point to the first non-tag object.
///
/// Note that `git2::Reference::peel` does not "peel in place", but returns a new object
/// instead.
#[doc(alias = "peel", alias = "git2")]
pub fn peel_to_kind(&mut self, kind: gix_object::Kind) -> Result<Object<'repo>, peel::to_kind::Error> {
let packed = self.repo.refs.cached_packed_buffer().map_err(|err| {
@@ -147,7 +180,7 @@ impl<'repo> Reference<'repo> {
) -> Result<Object<'repo>, peel::to_kind::Error> {
let target = self
.inner
.follow_to_object_in_place_packed(&self.repo.refs, packed)?
.follow_to_object_packed(&self.repo.refs, packed)?
.attach(self.repo);
Ok(target.object()?.peel_to_kind(kind)?)
}
@@ -175,7 +208,7 @@ impl<'repo> Reference<'repo> {
) -> Result<Id<'repo>, follow::to_object::Error> {
Ok(self
.inner
.follow_to_object_in_place_packed(&self.repo.refs, packed)?
.follow_to_object_packed(&self.repo.refs, packed)?
.attach(self.repo))
}


@@ -2,8 +2,47 @@ use crate::{bstr::BStr, remote, Remote};
/// Builder methods
impl Remote<'_> {
/// Override the `url` to be used when fetching data from a remote.
///
/// Note that this URL is typically set during instantiation with [`crate::Repository::remote_at()`].
pub fn with_url<Url, E>(self, url: Url) -> Result<Self, remote::init::Error>
where
Url: TryInto<gix_url::Url, Error = E>,
gix_url::parse::Error: From<E>,
{
self.url_inner(
url.try_into().map_err(|err| remote::init::Error::Url(err.into()))?,
true,
)
}
/// Set the `url` to be used when fetching data from a remote, without applying rewrite rules in case these could be faulty,
/// eliminating one failure mode.
///
/// Note that this URL is typically set during instantiation with [`crate::Repository::remote_at_without_url_rewrite()`].
pub fn with_url_without_url_rewrite<Url, E>(self, url: Url) -> Result<Self, remote::init::Error>
where
Url: TryInto<gix_url::Url, Error = E>,
gix_url::parse::Error: From<E>,
{
self.url_inner(
url.try_into().map_err(|err| remote::init::Error::Url(err.into()))?,
false,
)
}
/// Set the `url` to be used when pushing data to a remote.
#[deprecated = "Use `with_push_url()` instead"]
pub fn push_url<Url, E>(self, url: Url) -> Result<Self, remote::init::Error>
where
Url: TryInto<gix_url::Url, Error = E>,
gix_url::parse::Error: From<E>,
{
self.with_push_url(url)
}
/// Set the `url` to be used when pushing data to a remote.
pub fn with_push_url<Url, E>(self, url: Url) -> Result<Self, remote::init::Error>
where
Url: TryInto<gix_url::Url, Error = E>,
gix_url::parse::Error: From<E>,
@@ -16,7 +55,18 @@ impl Remote<'_> {
/// Set the `url` to be used when pushing data to a remote, without applying rewrite rules in case these could be faulty,
/// eliminating one failure mode.
#[deprecated = "Use `with_push_url_without_rewrite()` instead"]
pub fn push_url_without_url_rewrite<Url, E>(self, url: Url) -> Result<Self, remote::init::Error>
where
Url: TryInto<gix_url::Url, Error = E>,
gix_url::parse::Error: From<E>,
{
self.with_push_url_without_url_rewrite(url)
}
/// Set the `url` to be used when pushing data to a remote, without applying rewrite rules in case these could be faulty,
/// eliminating one failure mode.
pub fn with_push_url_without_url_rewrite<Url, E>(self, url: Url) -> Result<Self, remote::init::Error>
where
Url: TryInto<gix_url::Url, Error = E>,
gix_url::parse::Error: From<E>,
@@ -50,6 +100,19 @@ impl Remote<'_> {
Ok(self)
}
fn url_inner(mut self, url: gix_url::Url, should_rewrite_urls: bool) -> Result<Self, remote::init::Error> {
self.url = url.into();
let (fetch_url_alias, _) = if should_rewrite_urls {
remote::init::rewrite_urls(&self.repo.config, self.url.as_ref(), None)
} else {
Ok((None, None))
}?;
self.url_alias = fetch_url_alias;
Ok(self)
}
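The `TryInto<gix_url::Url>` bound these builder methods share can be sketched with toy types (`Url`, `ParseError`, and `Remote` below are illustrative, not the gix API): anything convertible into the URL type is accepted, already-parsed values included, and conversion failures surface through the builder's own error type.

```rust
use std::convert::Infallible;

#[derive(Debug, PartialEq)]
struct Url(String);

#[derive(Debug)]
struct ParseError(String);

// Needed so that passing an already-parsed `Url` (whose conversion cannot fail)
// satisfies the `ParseError: From<E>` bound below.
impl From<Infallible> for ParseError {
    fn from(never: Infallible) -> Self {
        match never {}
    }
}

impl TryFrom<&str> for Url {
    type Error = ParseError;
    fn try_from(s: &str) -> Result<Self, Self::Error> {
        if s.contains("://") || s.contains('@') {
            Ok(Url(s.to_string()))
        } else {
            Err(ParseError(format!("not a url: {s}")))
        }
    }
}

#[derive(Debug, Default)]
struct Remote {
    url: Option<Url>,
}

impl Remote {
    // Accept any type convertible into `Url`, funneling conversion
    // failures into the builder's own error type.
    fn with_url<U, E>(mut self, url: U) -> Result<Self, ParseError>
    where
        U: TryInto<Url, Error = E>,
        ParseError: From<E>,
    {
        self.url = Some(url.try_into().map_err(ParseError::from)?);
        Ok(self)
    }
}

fn main() {
    let remote = Remote::default().with_url("https://example.com/repo").unwrap();
    assert_eq!(remote.url, Some(Url("https://example.com/repo".to_string())));
    assert!(Remote::default().with_url("not a url").is_err());
}
```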
/// Add `specs` as refspecs for `direction` to our list if they are unique, or ignore them otherwise.
pub fn with_refspecs<Spec>(
mut self,


@@ -127,7 +127,7 @@ pub(crate) fn update(
match existing
.try_id()
.map_or_else(|| existing.clone().peel_to_id_in_place(), Ok)
.map_or_else(|| existing.clone().peel_to_id(), Ok)
.map(crate::Id::detach)
{
Ok(local_id) => {


@@ -945,7 +945,7 @@ mod update {
},
TargetRef::Symbolic(name) => {
let target = name.as_bstr().into();
match r.peel_to_id_in_place() {
match r.peel_to_id() {
Ok(id) => gix_protocol::handshake::Ref::Symbolic {
full_ref_name,
target,


@@ -36,7 +36,7 @@ impl crate::Repository {
None => {
blob_id = blob_id.or_else(|| {
self.head().ok().and_then(|mut head| {
let commit = head.peel_to_commit_in_place().ok()?;
let commit = head.peel_to_commit().ok()?;
let tree = commit.tree().ok()?;
tree.find_entry(".mailmap").map(|e| e.object_id())
})


@@ -63,6 +63,36 @@ mod submodule;
mod thread_safe;
mod worktree;
///
mod new_commit {
/// The error returned by [`new_commit(…)`](crate::Repository::new_commit()).
#[derive(Debug, thiserror::Error)]
#[allow(missing_docs)]
pub enum Error {
#[error(transparent)]
ParseTime(#[from] crate::config::time::Error),
#[error("Committer identity is not configured")]
CommitterMissing,
#[error("Author identity is not configured")]
AuthorMissing,
#[error(transparent)]
NewCommitAs(#[from] crate::repository::new_commit_as::Error),
}
}
///
mod new_commit_as {
/// The error returned by [`new_commit_as(…)`](crate::Repository::new_commit_as()).
#[derive(Debug, thiserror::Error)]
#[allow(missing_docs)]
pub enum Error {
#[error(transparent)]
WriteObject(#[from] crate::object::write::Error),
#[error(transparent)]
FindCommit(#[from] crate::object::find::existing::Error),
}
}
///
#[cfg(feature = "blame")]
pub mod blame_file {


@@ -10,6 +10,7 @@ use gix_ref::{
};
use smallvec::SmallVec;
use crate::repository::{new_commit, new_commit_as};
use crate::{commit, ext::ObjectIdExt, object, tag, Blob, Commit, Id, Object, Reference, Tag, Tree};
/// Tree editing
@@ -86,6 +87,8 @@ impl crate::Repository {
/// Obtain information about an object without fully decoding it, or fail if the object doesn't exist.
///
/// Note that despite being cheaper than [`Self::find_object()`], there is still some effort traversing delta-chains.
/// Also note that for empty trees and blobs, it will always report them as existing in loose objects, even if they
/// don't exist at all or exist only in a pack.
#[doc(alias = "read_header", alias = "git2")]
pub fn find_header(&self, id: impl Into<ObjectId>) -> Result<gix_odb::find::Header, object::find::existing::Error> {
let id = id.into();
@@ -234,7 +237,7 @@ impl crate::Repository {
/// Create a tag reference named `name` (without `refs/tags/` prefix) pointing to a newly created tag object
/// which in turn points to `target` and return the newly created reference.
///
/// It will be created with `constraint` which is most commonly to [only create it][PreviousValue::MustNotExist]
/// It will be created with `constraint` which is most commonly to [only create it](PreviousValue::MustNotExist)
/// or to [force overwriting a possibly existing tag](PreviousValue::Any).
pub fn tag(
&self,
@@ -376,6 +379,49 @@ impl crate::Repository {
self.commit_as(committer, author, reference, message, tree, parents)
}
/// Create a new commit object with `message` referring to `tree` with `parents`, and write it to the object database.
/// Do not, however, update any references.
///
/// The commit is created without a message-encoding field, so the message can be assumed to be UTF-8.
/// `author` and `committer` fields are pre-set from the configuration, which can be altered
/// [temporarily](crate::Repository::config_snapshot_mut()) before the call if required.
pub fn new_commit(
&self,
message: impl AsRef<str>,
tree: impl Into<ObjectId>,
parents: impl IntoIterator<Item = impl Into<ObjectId>>,
) -> Result<Commit<'_>, new_commit::Error> {
let author = self.author().ok_or(new_commit::Error::AuthorMissing)??;
let committer = self.committer().ok_or(new_commit::Error::CommitterMissing)??;
Ok(self.new_commit_as(committer, author, message, tree, parents)?)
}
/// Create a new commit object with `message` referring to `tree` with `parents`, using the specified
/// `committer` and `author`, and write it to the object database. Do not, however, update any references.
///
/// This forces setting the commit time and author time by hand. Note that typically, committer and author are the same.
/// The commit is created without a message-encoding field, so the message can be assumed to be UTF-8.
pub fn new_commit_as<'a, 'c>(
&self,
committer: impl Into<gix_actor::SignatureRef<'c>>,
author: impl Into<gix_actor::SignatureRef<'a>>,
message: impl AsRef<str>,
tree: impl Into<ObjectId>,
parents: impl IntoIterator<Item = impl Into<ObjectId>>,
) -> Result<Commit<'_>, new_commit_as::Error> {
let commit = gix_object::Commit {
message: message.as_ref().into(),
tree: tree.into(),
author: author.into().into(),
committer: committer.into().into(),
encoding: None,
parents: parents.into_iter().map(Into::into).collect(),
extra_headers: Default::default(),
};
let id = self.write_object(commit)?;
Ok(id.object()?.into_commit())
}
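For orientation, the object written by `new_commit_as` uses git's standard textual commit layout: header lines for tree, parents, author, and committer, then a blank line and the message. A minimal sketch with an illustrative helper (not the gix implementation):

```rust
fn serialize_commit(
    tree: &str,
    parents: &[&str],
    author: &str,
    committer: &str,
    message: &str,
) -> String {
    let mut out = format!("tree {tree}\n");
    for parent in parents {
        out.push_str(&format!("parent {parent}\n"));
    }
    out.push_str(&format!("author {author}\n"));
    out.push_str(&format!("committer {committer}\n"));
    out.push('\n'); // no `encoding` header: readers assume UTF-8
    out.push_str(message);
    out
}

fn main() {
    let commit = serialize_commit(
        "4b825dc642cb6eb9a060e54bf8d69288fbee4904", // the well-known empty tree (SHA-1)
        &[],
        "a <a@example.com> 3 +0100",
        "c <c@example.com> 1 +0030",
        "message",
    );
    assert!(commit.starts_with("tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904\n"));
    assert!(commit.ends_with("\n\nmessage"));
}
```

Because author and committer (including their timestamps) are fixed inputs, the serialized bytes, and hence the commit id, are fully deterministic, which is what the `new_commit_as` test relies on.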
/// Return an empty tree object, suitable for [getting changes](Tree::changes()).
///
/// Note that the returned object is special and doesn't necessarily physically exist in the object database.
@@ -392,7 +438,7 @@ impl crate::Repository {
/// This means that this object can be used in an uninitialized, empty repository which would report to have no objects at all.
pub fn empty_blob(&self) -> Blob<'_> {
Blob {
id: gix_hash::ObjectId::empty_blob(self.object_hash()),
id: self.object_hash().empty_blob(),
data: Vec::new(),
repo: self,
}
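The reason empty blobs and trees have well-known ids, as returned by `empty_blob()`, is that a git object id hashes the header `"<kind> <size>\0"` followed by the payload, so an empty blob always hashes the same seven bytes. A sketch of just the framing (the hashing itself is left to the object database):

```rust
fn loose_object_frame(kind: &str, data: &[u8]) -> Vec<u8> {
    // `"<kind> <byte-count>\0"` header, then the payload; the object id
    // is the hash of exactly these bytes.
    let mut framed = format!("{kind} {}\0", data.len()).into_bytes();
    framed.extend_from_slice(data);
    framed
}

fn main() {
    // An empty blob always frames to the same 7 bytes, hence its
    // well-known SHA-1 id e69de29bb2d1d6434b8b29ae775ad8c2e48c5391.
    assert_eq!(loose_object_frame("blob", b""), b"blob 0\0".to_vec());
    assert_eq!(loose_object_frame("blob", b"hi"), b"blob 2\0hi".to_vec());
}
```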


@@ -214,7 +214,7 @@ impl crate::Repository {
/// is freshly initialized and doesn't have any commits yet. It could also fail if the
/// head does not point to a commit.
pub fn head_commit(&self) -> Result<crate::Commit<'_>, reference::head_commit::Error> {
Ok(self.head()?.peel_to_commit_in_place()?)
Ok(self.head()?.peel_to_commit()?)
}
/// Return the tree id the `HEAD` reference currently points to after peeling it fully,


@@ -55,7 +55,7 @@ impl Repository {
Some(id) => id,
None => match self
.head()?
.try_peel_to_id_in_place()?
.try_peel_to_id()?
.map(|id| -> Result<Option<_>, submodule::modules::Error> {
Ok(id
.object()?


@@ -201,7 +201,7 @@ impl Delegate<'_> {
for (r, obj) in self.refs.iter().zip(self.objs.iter_mut()) {
if let (Some(ref_), obj_opt @ None) = (r, obj) {
if let Some(id) = ref_.target.try_id().map(ToOwned::to_owned).or_else(|| {
match ref_.clone().attach(repo).peel_to_id_in_place() {
match ref_.clone().attach(repo).peel_to_id() {
Err(err) => {
self.err.push(Error::PeelToId {
name: ref_.name.clone(),


@@ -214,7 +214,7 @@ impl delegate::Revision for Delegate<'_> {
Ok(Some((ref_name, id))) => {
let id = match self.repo.find_reference(ref_name.as_bstr()) {
Ok(mut r) => {
let id = r.peel_to_id_in_place().map(crate::Id::detach).unwrap_or(id);
let id = r.peel_to_id().map(crate::Id::detach).unwrap_or(id);
self.refs[self.idx] = Some(r.detach());
id
}


@@ -14,8 +14,8 @@ mod peel {
assert_eq!(repo.head_tree_id()?, commit.tree_id()?);
assert_eq!(repo.head_tree_id_or_empty()?, commit.tree_id()?);
assert_eq!(repo.head()?.try_into_peeled_id()?.expect("born"), expected_commit);
assert_eq!(repo.head()?.peel_to_object_in_place()?.id, expected_commit);
assert_eq!(repo.head()?.try_peel_to_id_in_place()?.expect("born"), expected_commit);
assert_eq!(repo.head()?.peel_to_object()?.id, expected_commit);
assert_eq!(repo.head()?.try_peel_to_id()?.expect("born"), expected_commit);
}
Ok(())
}


@@ -66,12 +66,12 @@ mod find {
"it points to a tag object"
);
let object = packed_tag_ref.peel_to_id_in_place()?;
let object = packed_tag_ref.peel_to_id()?;
let the_commit = hex_to_id("134385f6d781b7e97062102c6a483440bfda2a03");
assert_eq!(object, the_commit, "it is assumed to be fully peeled");
assert_eq!(
object,
packed_tag_ref.peel_to_id_in_place()?,
packed_tag_ref.peel_to_id()?,
"peeling again yields the same object"
);
@@ -79,7 +79,7 @@ mod find {
let expected: &FullNameRef = "refs/heads/multi-link-target1".try_into()?;
assert_eq!(symbolic_ref.name(), expected);
assert_eq!(symbolic_ref.peel_to_id_in_place()?, the_commit);
assert_eq!(symbolic_ref.peel_to_id()?, the_commit);
let expected: &FullNameRef = "refs/remotes/origin/multi-link-target3".try_into()?;
assert_eq!(symbolic_ref.name(), expected, "it follows symbolic refs, too");


@@ -63,7 +63,7 @@ mod save_as_to {
let repo = basic_repo()?;
let mut remote = repo
.remote_at("https://example.com/path")?
.push_url("https://ein.hub/path")?
.with_push_url("https://ein.hub/path")?
.with_fetch_tags(gix::remote::fetch::Tags::All)
.with_refspecs(
[


@@ -1,6 +1,9 @@
use gix_date::parse::TimeBuf;
use gix_odb::Header;
use gix_pack::Find;
use gix_testtools::tempfile;
use crate::util::named_subrepo_opts;
use crate::util::{hex_to_id, named_subrepo_opts};
mod object_database_impl {
use gix_object::{Exists, Find, FindHeader};
@@ -259,7 +262,7 @@ mod write_object {
let oid = repo.write_object(gix::objs::TreeRef::empty())?;
assert_eq!(
oid,
gix::hash::ObjectId::empty_tree(repo.object_hash()),
repo.object_hash().empty_tree(),
"it produces a well-known empty tree id"
);
Ok(())
@@ -274,7 +277,7 @@ mod write_object {
time: Default::default(),
};
let commit = gix::objs::Commit {
tree: gix::hash::ObjectId::empty_tree(repo.object_hash()),
tree: repo.object_hash().empty_tree(),
author: actor.clone(),
committer: actor,
parents: Default::default(),
@@ -289,6 +292,21 @@ mod write_object {
);
Ok(())
}
#[test]
fn blob_write_to_implementation() -> crate::Result {
let repo = empty_bare_in_memory_repo()?;
let blob = repo.empty_blob();
// Create a blob directly to test our WriteTo implementation
let actual_id = repo.write_object(&blob)?;
let actual_blob = repo.find_object(actual_id)?.into_blob();
assert_eq!(actual_id, repo.object_hash().empty_blob());
assert_eq!(actual_blob.data, blob.data);
Ok(())
}
}
mod write_blob {
@@ -429,6 +447,7 @@ mod find {
use gix_pack::Find;
use crate::basic_repo;
use crate::repository::object::empty_bare_in_memory_repo;
#[test]
fn find_and_try_find_with_and_without_object_cache() -> crate::Result {
@@ -507,6 +526,86 @@ mod find {
);
Ok(())
}
#[test]
fn empty_blob_can_be_found_if_it_exists() -> crate::Result {
let repo = basic_repo()?;
let empty_blob = gix::hash::ObjectId::empty_blob(repo.object_hash());
assert_eq!(
repo.find_object(empty_blob)?.into_blob().data.len(),
0,
"The basic_repo fixture contains an empty blob"
);
assert!(repo.has_object(empty_blob));
assert_eq!(
repo.find_header(empty_blob)?,
gix_odb::find::Header::Loose {
kind: gix_object::Kind::Blob,
size: 0,
},
"empty blob is found when it exists in the repository"
);
assert_eq!(
repo.try_find_object(empty_blob)?
.expect("present")
.into_blob()
.data
.len(),
0
);
assert_eq!(
repo.try_find_header(empty_blob)?,
Some(gix_odb::find::Header::Loose {
kind: gix_object::Kind::Blob,
size: 0,
}),
"empty blob is found when it exists in the repository"
);
Ok(())
}
#[test]
fn empty_blob() -> crate::Result {
let repo = empty_bare_in_memory_repo()?;
let empty_blob = repo.empty_blob();
assert_eq!(empty_blob.id, repo.object_hash().empty_blob());
assert_eq!(empty_blob.data.len(), 0);
assert!(!repo.has_object(empty_blob.id), "it doesn't exist by default");
repo.write_blob(&empty_blob.data)?;
assert!(repo.has_object(empty_blob.id), "it exists after it was written");
Ok(())
}
}
#[test]
fn empty_objects_are_always_present_but_not_in_plumbing() -> crate::Result {
let repo = empty_bare_in_memory_repo()?;
let empty_blob_id = repo.object_hash().empty_blob();
assert!(
!repo.has_object(empty_blob_id),
"empty blob is not present unless it actually exists"
);
assert!(!repo.objects.contains(&empty_blob_id));
assert!(
repo.find_header(empty_blob_id).is_err(),
"Empty blob doesn't exist automatically just like in Git"
);
assert_eq!(repo.objects.try_header(&empty_blob_id)?, None);
assert_eq!(repo.try_find_header(empty_blob_id)?, None);
assert!(repo.find_object(empty_blob_id).is_err());
assert!(repo.try_find_object(empty_blob_id)?.is_none());
let mut buf = Vec::new();
assert_eq!(repo.objects.try_find(&empty_blob_id, &mut buf)?, None);
Ok(())
}
mod tag {
@@ -648,7 +747,7 @@ mod commit {
fn multi_line_commit_message_uses_first_line_in_ref_log_ref_nonexisting() -> crate::Result {
let _env = freeze_time();
let (repo, _keep) = crate::repo_rw_opts("make_basic_repo.sh", restricted_and_git())?;
let parent = repo.find_reference("HEAD")?.peel_to_id_in_place()?;
let parent = repo.find_reference("HEAD")?.peel_to_id()?;
let empty_tree_id = parent.object()?.to_commit_ref_iter().tree_id().expect("tree to be set");
assert_eq!(
parent
@@ -697,7 +796,7 @@ mod commit {
);
let mut branch = repo.find_reference("new-branch")?;
let current_commit = branch.peel_to_id_in_place()?;
let current_commit = branch.peel_to_id()?;
assert_eq!(current_commit, second_commit_id, "the commit was set");
let mut log = branch.log_iter();
@@ -714,6 +813,68 @@ mod commit {
}
}
#[test]
fn new_commit_as() -> crate::Result {
let repo = empty_bare_in_memory_repo()?;
let empty_tree = repo.empty_tree();
let committer = gix::actor::Signature {
name: "c".into(),
email: "c@example.com".into(),
time: gix_date::parse_header("1 +0030").unwrap(),
};
let author = gix::actor::Signature {
name: "a".into(),
email: "a@example.com".into(),
time: gix_date::parse_header("3 +0100").unwrap(),
};
let commit = repo.new_commit_as(
committer.to_ref(&mut TimeBuf::default()),
author.to_ref(&mut TimeBuf::default()),
"message",
empty_tree.id,
gix::commit::NO_PARENT_IDS,
)?;
assert_eq!(
commit.id,
hex_to_id("b51277f2b2ea77676dd6fa877b5eb5ba2f7094d9"),
"The commit-id is stable as the author/committer is controlled"
);
let commit = commit.decode()?;
let mut buf = TimeBuf::default();
assert_eq!(commit.committer, committer.to_ref(&mut buf));
assert_eq!(commit.author, author.to_ref(&mut buf));
assert_eq!(commit.message, "message");
assert_eq!(commit.tree(), empty_tree.id);
assert_eq!(commit.parents.len(), 0);
assert!(repo.head()?.is_unborn(), "The head-ref wasn't touched");
Ok(())
}
#[test]
fn new_commit() -> crate::Result {
let mut repo = empty_bare_in_memory_repo()?;
let mut config = repo.config_snapshot_mut();
config.set_value(&gix::config::tree::User::NAME, "user")?;
config.set_value(&gix::config::tree::User::EMAIL, "user@example.com")?;
config.commit()?;
let empty_tree_id = repo.object_hash().empty_tree();
let commit = repo.new_commit("initial", empty_tree_id, gix::commit::NO_PARENT_IDS)?;
let commit = commit.decode()?;
assert_eq!(commit.message, "initial");
assert_eq!(commit.tree(), empty_tree_id);
assert_eq!(commit.parents.len(), 0);
assert!(repo.head()?.is_unborn(), "The head-ref wasn't touched");
Ok(())
}
fn empty_bare_in_memory_repo() -> crate::Result<gix::Repository> {
Ok(named_subrepo_opts("make_basic_repo.sh", "bare.git", gix::open::Options::isolated())?.with_object_memory())
}


@@ -13,13 +13,17 @@ mod remote_at {
assert_eq!(remote.url(Direction::Fetch).unwrap().to_bstring(), fetch_url);
assert_eq!(remote.url(Direction::Push).unwrap().to_bstring(), fetch_url);
let mut remote = remote.push_url("user@host.xz:./relative")?;
let mut remote = remote.with_push_url("user@host.xz:./relative")?;
assert_eq!(
remote.url(Direction::Push).unwrap().to_bstring(),
"user@host.xz:./relative"
);
assert_eq!(remote.url(Direction::Fetch).unwrap().to_bstring(), fetch_url);
let new_fetch_url = "https://host.xz/byron/gitoxide";
remote = remote.with_url(new_fetch_url)?;
assert_eq!(remote.url(Direction::Fetch).unwrap().to_bstring(), new_fetch_url);
for (spec, direction) in [
("refs/heads/push", Direction::Push),
("refs/heads/fetch", Direction::Fetch),
@@ -57,9 +61,21 @@ mod remote_at {
"push is the same as fetch was rewritten"
);
let remote = remote.with_url("https://github.com/foobar/gitoxide")?;
assert_eq!(
remote.url(Direction::Fetch).unwrap().to_bstring(),
rewritten_fetch_url,
"fetch was rewritten"
);
assert_eq!(
remote.url(Direction::Push).unwrap().to_bstring(),
rewritten_fetch_url,
"push is the same as fetch was rewritten"
);
let remote = repo
.remote_at("https://github.com/foobar/gitoxide".to_owned())?
.push_url("file://dev/null".to_owned())?;
.with_push_url("file://dev/null".to_owned())?;
assert_eq!(remote.url(Direction::Fetch).unwrap().to_bstring(), rewritten_fetch_url);
assert_eq!(
remote.url(Direction::Push).unwrap().to_bstring(),
@@ -87,15 +103,40 @@ mod remote_at {
"push is the same as fetch was rewritten"
);
let remote = remote.with_url_without_url_rewrite("https://github.com/foobaz/gitoxide")?;
assert_eq!(
remote.url(Direction::Fetch).unwrap().to_bstring(),
"https://github.com/foobaz/gitoxide",
"fetch was rewritten"
);
assert_eq!(
remote.url(Direction::Push).unwrap().to_bstring(),
"https://github.com/foobaz/gitoxide",
"push mirrors fetch, with rewrite rules not applied"
);
let remote = repo
.remote_at_without_url_rewrite("https://github.com/foobar/gitoxide".to_owned())?
.push_url_without_url_rewrite("file://dev/null".to_owned())?;
.with_push_url_without_url_rewrite("file://dev/null".to_owned())?;
assert_eq!(remote.url(Direction::Fetch).unwrap().to_bstring(), fetch_url);
assert_eq!(
remote.url(Direction::Push).unwrap().to_bstring(),
"file://dev/null",
"push-url rewrite rules are not applied"
);
let remote = remote
.with_url_without_url_rewrite("https://github.com/foobaz/gitoxide".to_owned())?
.with_push_url_without_url_rewrite("file://dev/null".to_owned())?;
assert_eq!(
remote.url(Direction::Fetch).unwrap().to_bstring(),
"https://github.com/foobaz/gitoxide"
);
assert_eq!(
remote.url(Direction::Push).unwrap().to_bstring(),
"file://dev/null",
"push-url rewrite rules are not applied"
);
Ok(())
}
}


@@ -16,19 +16,16 @@ pub struct Args {
#[derive(Debug, clap::Subcommand)]
pub enum Subcommands {
/// Generate a shell script that creates a git repository containing all commits that are
/// traversed when a blame is generated.
/// traversed when following a given file through the Git history just as `git blame` would.
///
/// This command extracts the files history so that blame, when run on the repository created
/// by the script, shows the same characteristics, in particular bugs, as the original, but in
/// a way that the original source file's content cannot be reconstructed.
/// a way that does not resemble the original source file's content to any greater extent than
/// is useful and necessary.
///
/// The idea is that by obfuscating the file's content we make it easier for people to share
/// the subset of data that's required for debugging purposes from repositories that are not
/// public.
///
/// Note that the obfuscation leaves certain properties of the source intact, so they can still
/// be inferred from the extracted history. Among these properties are directory structure
/// (though not the directories' names), renames, number of lines, and whitespace.
/// Note that this should not be used to redact sensitive information. The obfuscation leaves
/// numerous properties of the source intact, such that it may be feasible to reconstruct the
/// input.
///
/// This command can also be helpful in debugging the blame algorithm itself.
///
@@ -59,15 +56,27 @@ pub enum Subcommands {
file: std::ffi::OsString,
/// Do not use `copy-royal` to obfuscate the content of blobs, but copy it verbatim.
///
/// Note that this should only be done if the source history does not contain information
/// you're not willing to share.
/// Note that, for producing cases for the gitoxide test suite, we usually prefer only to
/// take blobs verbatim if the source repository was purely for testing.
#[clap(long)]
verbatim: bool,
},
/// Copy a tree so that it diffs the same but can't be traced back uniquely to its source.
/// Copy a tree so that it diffs the same but does not resemble the original files' content to
/// any greater extent than is useful and necessary.
///
/// The idea is that we don't want to deal with licensing, it's more about patterns in order to
/// reproduce cases for tests.
/// The idea is that this preserves the patterns that are usually sufficient to reproduce cases
/// for tests of diffs, both for making the tests work and for keeping the diffs understandable
/// to developers working on the tests, while avoiding keeping large verbatim fragments of code
/// based on which the test cases were created. The benefits of "reducing" the code to these
/// patterns include that the original meaning and function of code will not be confused with
/// the code of gitoxide itself, will not distract from the effects observed in their diffs,
/// and will not inadvertently be caught up in code cleanup efforts (e.g. attempting to adjust
/// style or fix bugs) that would make sense in code of gitoxide itself but that would subtly
/// break data test fixtures if done on their data.
///
/// Note that this should not be used to redact sensitive information. The obfuscation leaves
/// numerous properties of the source intact, such that it may be feasible to reconstruct the
/// input.
#[clap(visible_alias = "cr")]
CopyRoyal {
/// Don't really copy anything.
@@ -93,8 +102,8 @@ pub enum Subcommands {
count: usize,
/// Do not use `copy-royal` to degenerate information of blobs, but take blobs verbatim.
///
/// Note that this should only be done if the source repository is purely for testing
/// or was created by yourself.
/// Note that, for producing cases for the gitoxide test suite, we usually prefer only to
/// take blobs verbatim if the source repository was purely for testing.
#[clap(long)]
verbatim: bool,
/// The directory into which the blobs and tree declarations will be written.