Age | Commit message | Author |
|
* extend the diriv cache to 100 entries
* add special handling for the immutable root diriv
The better cache allows us to shed some complexity from the path
encryption logic (the parent-of-parent check).
Mitigates https://github.com/rfjakob/gocryptfs/issues/127
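A minimal sketch of the caching idea (the struct layout, field names and the
100-entry eviction are assumptions, not the actual dirivcache code):

import "sync"

type dirIVCache struct {
    sync.RWMutex
    // The root directory's diriv is immutable, so it can be cached forever.
    rootDirIV []byte
    // Up to 100 recently used directory IVs, keyed by ciphertext directory path.
    entries map[string][]byte
}

func (c *dirIVCache) lookup(cDir string) []byte {
    c.RLock()
    defer c.RUnlock()
    if cDir == "" {
        return c.rootDirIV
    }
    return c.entries[cDir]
}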
|
|
Dir is like filepath.Dir but returns "" instead of ".".
This was already implemented in fusefrontend_reverse as saneDir().
We will need it in nametransform for the improved diriv caching.
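A sketch of the helper (essentially the saneDir() logic mentioned above):

import "path/filepath"

// Dir behaves like filepath.Dir, but returns "" instead of ".".
func Dir(path string) string {
    d := filepath.Dir(path)
    if d == "." {
        return ""
    }
    return d
}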
|
|
Needs some space to grow.
renamed: internal/nametransform/diriv_cache.go -> internal/nametransform/dirivcache/dirivcache.go
|
|
This operation has been done three times by identical
sections of code. Create a function for it.
|
|
As noticed by @riking, the logic in the bash script will break
when Go 1 version numbers reach double-digits.
Instead, use a build tag "!go1.5" to cause a syntax error:
$ /opt/go1.4.3/bin/go build
can't load package: package github.com/rfjakob/gocryptfs:
go1.4.go:5:1: expected 'package', found 'STRING' "You need Go 1.5 or higher to compile gocryptfs!"
Fixes https://github.com/rfjakob/gocryptfs/issues/133
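A sketch of what such a guard file (go1.4.go) can look like; with Go 1.4 and
older, the "!go1.5" constraint keeps the file in the build and the bare string
triggers the syntax error shown above, while Go 1.5+ skips the file entirely:

// +build !go1.5

"You need Go 1.5 or higher to compile gocryptfs!"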
|
|
...and move all profiling functionality to its own file, as
the main function is already long enough.
Periodically saving the memory profile allows capturing the used
memory during normal operation, as opposed to on exit, where the
kernel has already issued FORGETs for all inodes.
This functionality has been used to create the memory profile shown
in https://github.com/rfjakob/gocryptfs/issues/132 .
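A hedged sketch of the periodic dump (the interval, error handling and
function name are assumptions):

import (
    "log"
    "os"
    "runtime/pprof"
    "time"
)

// saveMemProfilePeriodically overwrites the profile file every 60 seconds so
// it reflects memory use during normal operation, not only at exit.
func saveMemProfilePeriodically(path string) {
    for range time.Tick(60 * time.Second) {
        f, err := os.Create(path)
        if err != nil {
            log.Println(err)
            continue
        }
        if err := pprof.WriteHeapProfile(f); err != nil {
            log.Println(err)
        }
        f.Close()
    }
}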
|
|
scrypt (used during masterkey decryption) allocates a lot of memory.
Go only returns memory to the OS after 5 minutes, which looks like
a waste. Call FreeOSMemory() to return it immediately.
Looking at a fresh mount:
before: VmRSS: 73556 kB
after: VmRSS: 8568 kB
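The call itself comes from runtime/debug; a sketch of where it could sit (the
wrapper function and its placement are assumptions):

import "runtime/debug"

// Called once, right after the scrypt-based masterkey decryption has finished.
func trimScryptMemory() {
    // Returns freed heap pages to the OS immediately instead of after ~5 minutes.
    debug.FreeOSMemory()
}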
|
|
A directory with a long name has two associated virtual files:
the .name file and the .diriv file.
These used to get the same inode number:
$ ls -di1 * */*
33313535 gocryptfs.longname.2togDFouca9mrTwtfF1RNW5DZRAQY8alaR7wO_Xd5Zw
1000000000033313535 gocryptfs.longname.2togDFouca9mrTwtfF1RNW5DZRAQY8alaR7wO_Xd5Zw/gocryptfs.diriv
1000000000033313535 gocryptfs.longname.2togDFouca9mrTwtfF1RNW5DZRAQY8alaR7wO_Xd5Zw.name
With this change we use another prefix (2 instead of 1) for .name files.
$ ls -di1 * */*
33313535 gocryptfs.longname.2togDFouca9mrTwtfF1RNW5DZRAQY8alaR7wO_Xd5Zw
1000000000033313535 gocryptfs.longname.2togDFouca9mrTwtfF1RNW5DZRAQY8alaR7wO_Xd5Zw/gocryptfs.diriv
2000000000033313535 gocryptfs.longname.2togDFouca9mrTwtfF1RNW5DZRAQY8alaR7wO_Xd5Zw.name
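A sketch of the derivation implied by the listing above (the constant and
function names are assumptions):

const (
    dirIVInodeOffset = 1000000000000000000 // prefix "1": virtual gocryptfs.diriv files
    nameInodeOffset  = 2000000000000000000 // prefix "2": virtual .name files
)

func dirIVInode(realInode uint64) uint64 { return dirIVInodeOffset + realInode }
func nameInode(realInode uint64) uint64  { return nameInodeOffset + realInode }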
|
|
This was working until DecryptName switched to returning
EBADMSG instead of EINVAL.
Add a test to catch the regression next time.
|
|
We passed our stdout and stderr to the new logger instance,
which makes sense for seeing any error messages, but it also means
that the fd is kept open even after we close it.
Fixes the new TestMountBackground test and
https://github.com/rfjakob/gocryptfs/issues/130 .
|
|
Currently fails, as reported at
https://github.com/rfjakob/gocryptfs/issues/130 .
|
|
This really is a part of daemonization.
No code changes.
|
|
I have added a subset of fsstress-gocryptfs.bash to EncFS as
fsstress-encfs.sh, improving the code a bit.
This change forward-ports these improvements to
fsstress-gocryptfs.bash.
|
|
On MacOS, building and testing without openssl is much easier.
The test suite should skip the tests that fail because of missing
openssl instead of aborting.
Fixes https://github.com/rfjakob/gocryptfs/issues/123
|
|
Fixed by including the correct header. Should work on older openssl
versions as well.
Error was:
locking.go:21: undefined reference to `CRYPTO_set_locking_callback'
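A sketch of the kind of fix (the package name here is an assumption): include
the header that actually declares the callback in the cgo preamble. On
pre-1.1.0 OpenSSL, CRYPTO_set_locking_callback is declared in openssl/crypto.h.

package stupidgcm

/*
#include <openssl/crypto.h>  // declares CRYPTO_set_locking_callback
*/
import "C"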
|
|
Due to RMW, we always need read permission on the backing file. This is a
problem if the file permissions do not allow reading (e.g. 0200 permissions).
This patch works around that problem by chmod'ing the file, obtaining an fd,
and chmod'ing it back.
Test included.
Issue reported at: https://github.com/rfjakob/gocryptfs/issues/125
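A simplified sketch of the workaround (the real code may differ in how it
resolves the path and handles errors):

import "os"

func openForRMW(path string) (*os.File, error) {
    fd, err := os.OpenFile(path, os.O_RDWR, 0)
    if !os.IsPermission(err) {
        return fd, err // success, or an error unrelated to permissions
    }
    fi, err2 := os.Stat(path)
    if err2 != nil {
        return nil, err
    }
    // Temporarily grant owner read permission, grab the fd, restore the mode.
    os.Chmod(path, fi.Mode().Perm()|0400)
    fd, err = os.OpenFile(path, os.O_RDWR, 0)
    os.Chmod(path, fi.Mode().Perm())
    return fd, err
}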
|
|
Currently, neither gocryptfs nor go-fuse automatically calls load_osxfuse
if the /dev/osxfuse* device(s) do not exist. At least tell the user
what to do.
See https://github.com/rfjakob/gocryptfs/issues/124 for user pain.
|
|
If I run "gocryptfs cypher plain", the resulting volume
should be named 'plain', just as it would be on Linux.
|
|
Saves 3% for the tar extract benchmark because we skip the allocation.
|
|
Previously we ran through the decryption steps even for an empty
ciphertext slice. The functions handle it correctly, but returning
early skips all the extra calls.
Speeds up the tar extract benchmark by about 4%.
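A hedged sketch of the shape of the early return (the function name and
signature are assumptions; doDecrypt stands in for the real per-block path):

func decryptBlocks(ciphertext []byte, doDecrypt func([]byte) ([]byte, error)) ([]byte, error) {
    if len(ciphertext) == 0 {
        // Empty input: return right away and skip all the extra calls.
        return nil, nil
    }
    return doDecrypt(ciphertext)
}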
|
|
|
|
Extracts the linux-3.0.tar.gz tarball while capturing memory
and cpu profiles.
|
|
|
|
go-fuse caps MaxWrite at MAX_KERNEL_WRITE anyway, and we
actually depend on this behavior now, as the byte pools
are sized according to MAX_KERNEL_WRITE.
So let's use MAX_KERNEL_WRITE explicitly.
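A sketch using go-fuse's MountOptions (the helper function is an assumption;
the value follows from 32 x 4 kiB pages):

import "github.com/hanwen/go-fuse/fuse"

const MAX_KERNEL_WRITE = 128 * 1024 // 32 x 4 kiB pages, the kernel-side FUSE limit

func mountOptions() *fuse.MountOptions {
    return &fuse.MountOptions{
        // State the limit explicitly - the byte pools are sized to this value.
        MaxWrite: MAX_KERNEL_WRITE,
    }
}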
|
|
Massive speed boost for streaming reads.
|
|
Adds a test for the optimization introduced in:
stupidgcm: Open: if "dst" is big enough, use it as the output buffer
|
|
This gets us a massive speed boost in streaming reads.
|
|
This means we won't need any allocation for the plaintext.
|
|
Easily saves lots of allocations.
|
|
This will allow us to return internal buffers to a pool.
|
|
|
|
bPool verifies the lengths of slices going in and out.
Also, add a plaintext block pool - pBlockPool - and use
it for decryption.
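A sketch of such a size-checked pool; only the names bPool and pBlockPool come
from the commit message, the rest is an assumption:

import (
    "log"
    "sync"
)

// bPool hands out byte slices of a fixed size and panics if a slice of the
// wrong length is put back or handed out.
type bPool struct {
    sync.Pool
    size int
}

func newBPool(size int) *bPool {
    b := &bPool{size: size}
    b.New = func() interface{} { return make([]byte, size) }
    return b
}

func (b *bPool) Get() []byte {
    s := b.Pool.Get().([]byte)
    if len(s) != b.size {
        log.Panicf("wrong len=%d, want=%d", len(s), b.size)
    }
    return s
}

func (b *bPool) Put(s []byte) {
    if len(s) != b.size {
        log.Panicf("wrong len=%d, want=%d", len(s), b.size)
    }
    b.Pool.Put(s)
}

pBlockPool would then simply be something like newBPool(4096), sized to one
plaintext block.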
|
|
|
|
|
|
This saves an allocation of the ciphertext block.
|
|
|
|
Reads 1GB of zeros while collecting memory and cpu profiles.
|
|
|
|
We use two levels of buffers:
1) 4kiB+overhead for each ciphertext block
2) 128kiB+overhead for each FUSE write (32 ciphertext blocks)
This commit adds a sync.Pool for both levels.
The memory-efficiency for small writes could be improved,
as we now always use a 128kiB buffer.
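A sketch of the two levels (the exact per-block overhead of nonce plus GCM tag
and the variable names are assumptions):

import "sync"

const blockOverhead = 16 + 16 // per-block nonce + GCM auth tag (assumed layout)

var (
    // Level 1: one 4 kiB ciphertext block.
    cBlockPool = sync.Pool{New: func() interface{} {
        return make([]byte, 4096+blockOverhead)
    }}
    // Level 2: one 128 kiB FUSE write = 32 ciphertext blocks.
    cWritePool = sync.Pool{New: func() interface{} {
        return make([]byte, 32*(4096+blockOverhead))
    }}
)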
|
|
|
|
|
|
go-fuse recently added a git tag - let's use it.
|
|
Writes 1GB of zeros to a gocryptfs mount while collecting
cpu and memory profiles.
|
|
Dup2 is not implemented on linux/arm64.
Fixes https://github.com/rfjakob/gocryptfs/issues/121 .
Also adds cross-compilation to CI.
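The portable replacement is Dup3, which is available on all Linux
architectures including arm64; a sketch (whether the commit does exactly this
is an assumption):

import (
    "os"
    "syscall"
)

// redirectStdout makes fd 1 point at f, like Dup2(fd, 1) would.
func redirectStdout(f *os.File) error {
    return syscall.Dup3(int(f.Fd()), 1, 0) // flags=0 gives plain Dup2 semantics
}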
|
|
|
|
128kiB = 32 x 4kiB pages is the maximum we get from the kernel. Splitting
up smaller writes is probably not worth it.
Parallelism is limited to two for now.
|
|
Slight streaming write improvement.
|
|
Spawn a worker goroutine that reads the next 512-byte block
while the current one is being drained.
This should help reduce waiting times when /dev/urandom is very
slow (like on Linux 3.16 kernels).
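A sketch of the read-ahead idea (the names and channel wiring are assumptions):

import (
    "crypto/rand"
    "log"
)

const prefetchBlock = 512

// prefetcher keeps one block of randomness ready while the consumer is still
// draining the previous one.
func prefetcher(out chan<- []byte) {
    for {
        b := make([]byte, prefetchBlock)
        if _, err := rand.Read(b); err != nil {
            log.Panic(err)
        }
        out <- b // blocks until the consumer has taken the previous block
    }
}

The consumer starts it once with ch := make(chan []byte); go prefetcher(ch)
and then pulls fresh blocks with <-ch.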
|
|
Allows for quickly testing the streaming write throughput.
|
|
Also, get rid of the half-empty line.
|