# Compare commits

9 commits: `v3.0.8...tiwarishub`
| Author | SHA1 | Date |
|---|---|---|
|  | 20817ef617 |  |
|  | 103570a2bf |  |
|  | aeb01573e6 |  |
|  | d351e68b9a |  |
|  | 3d236ac88e |  |
|  | b8ddf3df10 |  |
|  | 0c5d98e6bb |  |
|  | 7c59aeb02d |  |
|  | c75dca6de7 |  |
#### .github/auto_assign.yml (1 change, vendored)

```diff
@@ -6,7 +6,6 @@ addAssignees: false
 
 # A list of reviewers to be added to pull requests (GitHub user name)
 reviewers:
-  - phantsure
   - kotewar
   - aparna-ravindra
   - tiwarishub
```
#### .github/workflows/auto-assign-issues.yml (2 changes, vendored)

```diff
@@ -11,5 +11,5 @@ jobs:
             - name: 'Auto-assign issue'
               uses: pozil/auto-assign-issue@v1.4.0
               with:
-                  assignees: phantsure,kotewar,tiwarishub,aparna-ravindra,vsvipul,bishal-pdmsft
+                  assignees: kotewar,tiwarishub,aparna-ravindra,vsvipul,bishal-pdmsft
                   numOfAssignee: 1
```
#### .licenses/npm/@actions/cache.dep.yml (2 changes, generated)

```diff
@@ -1,6 +1,6 @@
 ---
 name: "@actions/cache"
-version: 3.0.4
+version: 3.0.0
 type: npm
 summary: 
 homepage: 
```
#### README.md (14 changes)

````diff
@@ -15,11 +15,6 @@ See ["Caching dependencies to speed up workflows"](https://help.github.com/githu
 * Updated the minimum runner version support from node 12 -> node 16.
 * Fixed avoiding empty cache save when no files are available for caching.
 * Fixed tar creation error while trying to create tar with path as `~/` home folder on `ubuntu-latest`.
-* Fixed zstd failing on amazon linux 2.0 runners.
-* Fixed cache not working with github workspace directory or current directory.
-* Fixed the download stuck problem by introducing a timeout of 1 hour for cache downloads.
-* Fix zstd not working for windows on gnu tar in issues.
-* Allowing users to provide a custom timeout as input for aborting download of a cache segment using an environment variable `SEGMENT_DOWNLOAD_TIMEOUT_MIN`. Default is 60 minutes.
 
 Refer [here](https://github.com/actions/cache/blob/v2/README.md) for previous versions
 
@@ -37,9 +32,6 @@ If you are using this inside a container, a POSIX-compliant `tar` needs to be in
 * `restore-keys` - An ordered list of keys to use for restoring stale cache if no cache hit occurred for key. Note
 `cache-hit` returns false in this case.
 
-#### Environment Variables
-* `SEGMENT_DOWNLOAD_TIMEOUT_MIN` - Segment download timeout (in minutes, default `60`) to abort download of the segment if not completed in the defined number of minutes. [Read more](#cache-segment-restore-timeout)
-
 ### Outputs
 
 * `cache-hit` - A boolean value to indicate an exact match was found for the key
@@ -89,7 +81,6 @@ Every programming language and framework has its own way of caching.
 See [Examples](examples.md) for a list of `actions/cache` implementations for use with:
 
 - [C# - NuGet](./examples.md#c---nuget)
-- [Clojure - Lein Deps](./examples.md#clojure---lein-deps)
 - [D - DUB](./examples.md#d---dub)
 - [Deno](./examples.md#deno)
 - [Elixir - Mix](./examples.md#elixir---mix)
@@ -223,11 +214,6 @@ jobs:
         if: steps.cache-primes.outputs.cache-hit != 'true'
         run: ./generate-primes -d prime-numbers
 ```
-## Cache segment restore timeout
-
-A cache gets downloaded in multiple segments of fixed sizes (`1GB` for a `32-bit` runner and `2GB` for a `64-bit` runner). Sometimes, a segment download gets stuck which causes the workflow job to be stuck forever and fail. Version `v3.0.8` of `actions/cache` introduces a segment download timeout. The segment download timeout will allow the segment download to get aborted and hence allow the job to proceed with a cache miss.
-
-Default value of this timeout is 60 minutes and can be customized by specifying an [environment variable](https://docs.github.com/en/actions/learn-github-actions/environment-variables) named `SEGMENT_DOWNLOAD_TIMEOUT_MINS` with timeout value in minutes.
 
 ## Contributing
 We would love for you to contribute to `actions/cache`, pull requests are welcome! Please see the [CONTRIBUTING.md](CONTRIBUTING.md) for more information.
````
#### RELEASES.md (14 changes)

```diff
@@ -15,17 +15,3 @@
 
 ### 3.0.4
 - Fixed tar creation error while trying to create tar with path as `~/` home folder on `ubuntu-latest`. ([issue](https://github.com/actions/cache/issues/689))
-
-### 3.0.5
-- Removed error handling by consuming actions/cache 3.0 toolkit, Now cache server error handling will be done by toolkit. ([PR](https://github.com/actions/cache/pull/834))
-
-### 3.0.6
-- Fixed [#809](https://github.com/actions/cache/issues/809) - zstd -d: no such file or directory error
-- Fixed [#833](https://github.com/actions/cache/issues/833) - cache doesn't work with github workspace directory
-
-### 3.0.7
-- Fixed [#810](https://github.com/actions/cache/issues/810) - download stuck issue. A new timeout is introduced in the download process to abort the download if it gets stuck and doesn't finish within an hour.
-
-### 3.0.8
-- Fix zstd not working for windows on gnu tar in issues [#888](https://github.com/actions/cache/issues/888) and [#891](https://github.com/actions/cache/issues/891).
-- Allowing users to provide a custom timeout as input for aborting download of a cache segment using an environment variable `SEGMENT_DOWNLOAD_TIMEOUT_MIN`. Default is 60 minutes.
```
#### dist/restore/index.js (124 changes, vendored)

```diff
@@ -1113,15 +1113,9 @@ function resolvePaths(patterns) {
                     .replace(new RegExp(`\\${path.sep}`, 'g'), '/');
                 core.debug(`Matched: ${relativeFile}`);
                 // Paths are made relative so the tar entries are all relative to the root of the workspace.
-                if (relativeFile === '') {
-                    // path.relative returns empty string if workspace and file are equal
-                    paths.push('.');
-                }
-                else {
-                    paths.push(`${relativeFile}`);
-                }
+                paths.push(`${relativeFile}`);
             }
         }
         catch (e_1_1) { e_1 = { error: e_1_1 }; }
         finally {
             try {
```
```diff
@@ -5473,7 +5467,6 @@ const util = __importStar(__webpack_require__(669));
 const utils = __importStar(__webpack_require__(15));
 const constants_1 = __webpack_require__(931);
 const requestUtils_1 = __webpack_require__(899);
-const abort_controller_1 = __webpack_require__(106);
 /**
  * Pipes the body of a HTTP response to a stream
  *
@@ -5657,26 +5650,17 @@ function downloadCacheStorageSDK(archiveLocation, archivePath, options) {
             const fd = fs.openSync(archivePath, 'w');
             try {
                 downloadProgress.startDisplayTimer();
-                const controller = new abort_controller_1.AbortController();
-                const abortSignal = controller.signal;
                 while (!downloadProgress.isDone()) {
                     const segmentStart = downloadProgress.segmentOffset + downloadProgress.segmentSize;
                     const segmentSize = Math.min(maxSegmentSize, contentLength - segmentStart);
                     downloadProgress.nextSegment(segmentSize);
-                    const result = yield promiseWithTimeout(options.segmentTimeoutInMs || 3600000, client.downloadToBuffer(segmentStart, segmentSize, {
-                        abortSignal,
+                    const result = yield client.downloadToBuffer(segmentStart, segmentSize, {
                         concurrency: options.downloadConcurrency,
                         onProgress: downloadProgress.onProgress()
-                    }));
-                    if (result === 'timeout') {
-                        controller.abort();
-                        throw new Error('Aborting cache download as the download time exceeded the timeout.');
-                    }
-                    else if (Buffer.isBuffer(result)) {
+                    });
                     fs.writeFileSync(fd, result);
-                    }
                 }
             }
             finally {
                 downloadProgress.stopDisplayTimer();
                 fs.closeSync(fd);
@@ -5685,16 +5669,6 @@ function downloadCacheStorageSDK(archiveLocation, archivePath, options) {
     });
 }
 exports.downloadCacheStorageSDK = downloadCacheStorageSDK;
-const promiseWithTimeout = (timeoutMs, promise) => __awaiter(void 0, void 0, void 0, function* () {
-    let timeoutHandle;
-    const timeoutPromise = new Promise(resolve => {
-        timeoutHandle = setTimeout(() => resolve('timeout'), timeoutMs);
-    });
-    return Promise.race([promise, timeoutPromise]).then(result => {
-        clearTimeout(timeoutHandle);
-        return result;
-    });
-});
 //# sourceMappingURL=downloadUtils.js.map
 
 /***/ }),
```
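The `promiseWithTimeout` helper deleted above implements a generic race between a promise and a timer. A self-contained sketch of the same pattern (our restatement, without the transpiled `__awaiter` wrapper):

```javascript
// Race a promise against a timer. If the timer wins, the result is the
// sentinel string 'timeout'; the caller can then abort and throw.
function promiseWithTimeout(timeoutMs, promise) {
    let timeoutHandle;
    const timeoutPromise = new Promise(resolve => {
        timeoutHandle = setTimeout(() => resolve('timeout'), timeoutMs);
    });
    return Promise.race([promise, timeoutPromise]).then(result => {
        clearTimeout(timeoutHandle); // always clear so the timer cannot keep the process alive
        return result;
    });
}

// A download that takes 100 ms against a 10 ms budget resolves to 'timeout'.
const slowDownload = new Promise(resolve => setTimeout(() => resolve('data'), 100));
promiseWithTimeout(10, slowDownload).then(result => console.log(result)); // 'timeout'
```

Note the sentinel-value design: the losing promise is not cancelled by `Promise.race`, which is why the caller in the diff above pairs the timeout with an `AbortController` to actually stop the download.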
```diff
@@ -37240,7 +37214,6 @@ const fs_1 = __webpack_require__(747);
 const path = __importStar(__webpack_require__(622));
 const utils = __importStar(__webpack_require__(15));
 const constants_1 = __webpack_require__(931);
-const IS_WINDOWS = process.platform === 'win32';
 function getTarPath(args, compressionMethod) {
     return __awaiter(this, void 0, void 0, function* () {
         switch (process.platform) {
@@ -37288,43 +37261,26 @@ function getWorkingDirectory() {
     var _a;
     return (_a = process.env['GITHUB_WORKSPACE']) !== null && _a !== void 0 ? _a : process.cwd();
 }
-// Common function for extractTar and listTar to get the compression method
-function getCompressionProgram(compressionMethod) {
-    // -d: Decompress.
-    // unzstd is equivalent to 'zstd -d'
-    // --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
-    // Using 30 here because we also support 32-bit self-hosted runners.
-    switch (compressionMethod) {
-        case constants_1.CompressionMethod.Zstd:
-            return [
-                '--use-compress-program',
-                IS_WINDOWS ? 'zstd -d --long=30' : 'unzstd --long=30'
-            ];
-        case constants_1.CompressionMethod.ZstdWithoutLong:
-            return ['--use-compress-program', IS_WINDOWS ? 'zstd -d' : 'unzstd'];
-        default:
-            return ['-z'];
-    }
-}
-function listTar(archivePath, compressionMethod) {
-    return __awaiter(this, void 0, void 0, function* () {
-        const args = [
-            ...getCompressionProgram(compressionMethod),
-            '-tf',
-            archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
-            '-P'
-        ];
-        yield execTar(args, compressionMethod);
-    });
-}
-exports.listTar = listTar;
 function extractTar(archivePath, compressionMethod) {
     return __awaiter(this, void 0, void 0, function* () {
         // Create directory to extract tar into
         const workingDirectory = getWorkingDirectory();
         yield io.mkdirP(workingDirectory);
+        // --d: Decompress.
+        // --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
+        // Using 30 here because we also support 32-bit self-hosted runners.
+        function getCompressionProgram() {
+            switch (compressionMethod) {
+                case constants_1.CompressionMethod.Zstd:
+                    return ['--use-compress-program', 'zstd -d --long=30'];
+                case constants_1.CompressionMethod.ZstdWithoutLong:
+                    return ['--use-compress-program', 'zstd -d'];
+                default:
+                    return ['-z'];
+            }
+        }
         const args = [
-            ...getCompressionProgram(compressionMethod),
+            ...getCompressionProgram(),
             '-xf',
             archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
             '-P',
@@ -37343,19 +37299,15 @@ function createTar(archiveFolder, sourceDirectories, compressionMethod) {
         fs_1.writeFileSync(path.join(archiveFolder, manifestFilename), sourceDirectories.join('\n'));
         const workingDirectory = getWorkingDirectory();
         // -T#: Compress using # working thread. If # is 0, attempt to detect and use the number of physical CPU cores.
-        // zstdmt is equivalent to 'zstd -T0'
         // --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
         // Using 30 here because we also support 32-bit self-hosted runners.
         // Long range mode is added to zstd in v1.3.2 release, so we will not use --long in older version of zstd.
         function getCompressionProgram() {
             switch (compressionMethod) {
                 case constants_1.CompressionMethod.Zstd:
-                    return [
-                        '--use-compress-program',
-                        IS_WINDOWS ? 'zstd -T0 --long=30' : 'zstdmt --long=30'
-                    ];
+                    return ['--use-compress-program', 'zstd -T0 --long=30'];
                 case constants_1.CompressionMethod.ZstdWithoutLong:
-                    return ['--use-compress-program', IS_WINDOWS ? 'zstd -T0' : 'zstdmt'];
+                    return ['--use-compress-program', 'zstd -T0'];
                 default:
                     return ['-z'];
             }
@@ -37377,6 +37329,32 @@ function createTar(archiveFolder, sourceDirectories, compressionMethod) {
     });
 }
 exports.createTar = createTar;
+function listTar(archivePath, compressionMethod) {
+    return __awaiter(this, void 0, void 0, function* () {
+        // --d: Decompress.
+        // --long=#: Enables long distance matching with # bits.
+        // Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
+        // Using 30 here because we also support 32-bit self-hosted runners.
+        function getCompressionProgram() {
+            switch (compressionMethod) {
+                case constants_1.CompressionMethod.Zstd:
+                    return ['--use-compress-program', 'zstd -d --long=30'];
+                case constants_1.CompressionMethod.ZstdWithoutLong:
+                    return ['--use-compress-program', 'zstd -d'];
+                default:
+                    return ['-z'];
+            }
+        }
+        const args = [
+            ...getCompressionProgram(),
+            '-tf',
+            archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
+            '-P'
+        ];
+        yield execTar(args, compressionMethod);
+    });
+}
+exports.listTar = listTar;
 //# sourceMappingURL=tar.js.map
 
 /***/ }),
```
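Both sides of the tar.js hunks build the same kind of `--use-compress-program` argument list; what the head branch removes is the `IS_WINDOWS` switch between the `zstd`, `zstdmt`, and `unzstd` binaries. A sketch of the base-side (v3.0.8) decompression selection, assuming `CompressionMethod` is a plain string enum (our stand-in for the bundled constants module):

```javascript
// Base-side (v3.0.8) selection of tar's --use-compress-program arguments.
// CompressionMethod here is our stand-in for the bundled constants module.
const CompressionMethod = { Zstd: 'zstd', ZstdWithoutLong: 'zstd-without-long', Gzip: 'gzip' };
const IS_WINDOWS = process.platform === 'win32';

function getDecompressionProgram(compressionMethod) {
    switch (compressionMethod) {
        case CompressionMethod.Zstd:
            // unzstd is equivalent to 'zstd -d'; --long=30 enables a 1GB
            // long-distance-matching window, the 32-bit runner maximum.
            return ['--use-compress-program', IS_WINDOWS ? 'zstd -d --long=30' : 'unzstd --long=30'];
        case CompressionMethod.ZstdWithoutLong:
            return ['--use-compress-program', IS_WINDOWS ? 'zstd -d' : 'unzstd'];
        default:
            return ['-z']; // plain gzip
    }
}

// Listing an archive then reduces to assembling tar arguments:
const args = [...getDecompressionProgram(CompressionMethod.Zstd), '-tf', 'cache.tzst', '-P'];
console.log(args);
```

The head branch drops the ternaries and always shells out to `zstd` directly, which is the simpler pre-v3.0.8 behavior.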
```diff
@@ -40811,8 +40789,7 @@ function getDownloadOptions(copy) {
     const result = {
         useAzureSdk: true,
         downloadConcurrency: 8,
-        timeoutInMs: 30000,
-        segmentTimeoutInMs: 3600000
+        timeoutInMs: 30000
     };
     if (copy) {
         if (typeof copy.useAzureSdk === 'boolean') {
@@ -40824,21 +40801,10 @@ function getDownloadOptions(copy) {
         if (typeof copy.timeoutInMs === 'number') {
             result.timeoutInMs = copy.timeoutInMs;
         }
-        if (typeof copy.segmentTimeoutInMs === 'number') {
-            result.segmentTimeoutInMs = copy.segmentTimeoutInMs;
-        }
     }
-    const segmentDownloadTimeoutMins = process.env['SEGMENT_DOWNLOAD_TIMEOUT_MINS'];
-    if (segmentDownloadTimeoutMins &&
-        !isNaN(Number(segmentDownloadTimeoutMins)) &&
-        isFinite(Number(segmentDownloadTimeoutMins))) {
-        result.segmentTimeoutInMs = Number(segmentDownloadTimeoutMins) * 60 * 1000;
-    }
     core.debug(`Use Azure SDK: ${result.useAzureSdk}`);
     core.debug(`Download concurrency: ${result.downloadConcurrency}`);
     core.debug(`Request timeout (ms): ${result.timeoutInMs}`);
-    core.debug(`Cache segment download timeout mins env var: ${process.env['SEGMENT_DOWNLOAD_TIMEOUT_MINS']}`);
-    core.debug(`Segment download timeout (ms): ${result.segmentTimeoutInMs}`);
     return result;
 }
 exports.getDownloadOptions = getDownloadOptions;
```
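The `getDownloadOptions` hunks remove the plumbing that converts `SEGMENT_DOWNLOAD_TIMEOUT_MINS` from minutes into milliseconds, falling back to a one-hour default. A standalone sketch of the validation the removed code performs (the function name and injectable `env` parameter are ours):

```javascript
// Resolve the segment download timeout in milliseconds, defaulting to
// 1 hour and accepting an override (in minutes) from the environment,
// mirroring the checks in the removed v3.0.8 code.
function getSegmentTimeoutInMs(env = process.env) {
    let segmentTimeoutInMs = 3600000; // 60 minutes
    const mins = env['SEGMENT_DOWNLOAD_TIMEOUT_MINS'];
    if (mins && !isNaN(Number(mins)) && isFinite(Number(mins))) {
        segmentTimeoutInMs = Number(mins) * 60 * 1000;
    }
    return segmentTimeoutInMs;
}

console.log(getSegmentTimeoutInMs({ SEGMENT_DOWNLOAD_TIMEOUT_MINS: '10' }));   // 600000
console.log(getSegmentTimeoutInMs({ SEGMENT_DOWNLOAD_TIMEOUT_MINS: 'soon' })); // 3600000 (non-numeric is ignored)
```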
							
								
								
									
										124
									
								
								dist/save/index.js
									
									
									
									
										vendored
									
									
								
							
							
						
						
									
										124
									
								
								dist/save/index.js
									
									
									
									
										vendored
									
									
								
							| @@ -1113,15 +1113,9 @@ function resolvePaths(patterns) { | ||||
|                     .replace(new RegExp(`\\${path.sep}`, 'g'), '/'); | ||||
|                 core.debug(`Matched: ${relativeFile}`); | ||||
|                 // Paths are made relative so the tar entries are all relative to the root of the workspace.
 | ||||
|                 if (relativeFile === '') { | ||||
|                     // path.relative returns empty string if workspace and file are equal
 | ||||
|                     paths.push('.'); | ||||
|                 } | ||||
|                 else { | ||||
|                 paths.push(`${relativeFile}`); | ||||
|             } | ||||
|         } | ||||
|         } | ||||
|         catch (e_1_1) { e_1 = { error: e_1_1 }; } | ||||
|         finally { | ||||
|             try { | ||||
| @@ -5473,7 +5467,6 @@ const util = __importStar(__webpack_require__(669)); | ||||
| const utils = __importStar(__webpack_require__(15)); | ||||
| const constants_1 = __webpack_require__(931); | ||||
| const requestUtils_1 = __webpack_require__(899); | ||||
| const abort_controller_1 = __webpack_require__(106); | ||||
| /** | ||||
|  * Pipes the body of a HTTP response to a stream | ||||
|  * | ||||
| @@ -5657,26 +5650,17 @@ function downloadCacheStorageSDK(archiveLocation, archivePath, options) { | ||||
|             const fd = fs.openSync(archivePath, 'w'); | ||||
|             try { | ||||
|                 downloadProgress.startDisplayTimer(); | ||||
|                 const controller = new abort_controller_1.AbortController(); | ||||
|                 const abortSignal = controller.signal; | ||||
|                 while (!downloadProgress.isDone()) { | ||||
|                     const segmentStart = downloadProgress.segmentOffset + downloadProgress.segmentSize; | ||||
|                     const segmentSize = Math.min(maxSegmentSize, contentLength - segmentStart); | ||||
|                     downloadProgress.nextSegment(segmentSize); | ||||
|                     const result = yield promiseWithTimeout(options.segmentTimeoutInMs || 3600000, client.downloadToBuffer(segmentStart, segmentSize, { | ||||
|                         abortSignal, | ||||
|                     const result = yield client.downloadToBuffer(segmentStart, segmentSize, { | ||||
|                         concurrency: options.downloadConcurrency, | ||||
|                         onProgress: downloadProgress.onProgress() | ||||
|                     })); | ||||
|                     if (result === 'timeout') { | ||||
|                         controller.abort(); | ||||
|                         throw new Error('Aborting cache download as the download time exceeded the timeout.'); | ||||
|                     } | ||||
|                     else if (Buffer.isBuffer(result)) { | ||||
|                     }); | ||||
|                     fs.writeFileSync(fd, result); | ||||
|                 } | ||||
|             } | ||||
|             } | ||||
|             finally { | ||||
|                 downloadProgress.stopDisplayTimer(); | ||||
|                 fs.closeSync(fd); | ||||
| @@ -5685,16 +5669,6 @@ function downloadCacheStorageSDK(archiveLocation, archivePath, options) { | ||||
|     }); | ||||
| } | ||||
| exports.downloadCacheStorageSDK = downloadCacheStorageSDK; | ||||
| const promiseWithTimeout = (timeoutMs, promise) => __awaiter(void 0, void 0, void 0, function* () { | ||||
|     let timeoutHandle; | ||||
|     const timeoutPromise = new Promise(resolve => { | ||||
|         timeoutHandle = setTimeout(() => resolve('timeout'), timeoutMs); | ||||
|     }); | ||||
|     return Promise.race([promise, timeoutPromise]).then(result => { | ||||
|         clearTimeout(timeoutHandle); | ||||
|         return result; | ||||
|     }); | ||||
| }); | ||||
| //# sourceMappingURL=downloadUtils.js.map
 | ||||
| 
 | ||||
| /***/ }), | ||||
| @@ -37240,7 +37214,6 @@ const fs_1 = __webpack_require__(747); | ||||
| const path = __importStar(__webpack_require__(622)); | ||||
| const utils = __importStar(__webpack_require__(15)); | ||||
| const constants_1 = __webpack_require__(931); | ||||
| const IS_WINDOWS = process.platform === 'win32'; | ||||
| function getTarPath(args, compressionMethod) { | ||||
|     return __awaiter(this, void 0, void 0, function* () { | ||||
|         switch (process.platform) { | ||||
| @@ -37288,43 +37261,26 @@ function getWorkingDirectory() { | ||||
|     var _a; | ||||
|     return (_a = process.env['GITHUB_WORKSPACE']) !== null && _a !== void 0 ? _a : process.cwd(); | ||||
| } | ||||
| // Common function for extractTar and listTar to get the compression method
 | ||||
| function getCompressionProgram(compressionMethod) { | ||||
|     // -d: Decompress.
 | ||||
|     // unzstd is equivalent to 'zstd -d'
 | ||||
|     // --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
 | ||||
|     // Using 30 here because we also support 32-bit self-hosted runners.
 | ||||
|     switch (compressionMethod) { | ||||
|         case constants_1.CompressionMethod.Zstd: | ||||
|             return [ | ||||
|                 '--use-compress-program', | ||||
|                 IS_WINDOWS ? 'zstd -d --long=30' : 'unzstd --long=30' | ||||
|             ]; | ||||
|         case constants_1.CompressionMethod.ZstdWithoutLong: | ||||
|             return ['--use-compress-program', IS_WINDOWS ? 'zstd -d' : 'unzstd']; | ||||
|         default: | ||||
|             return ['-z']; | ||||
|     } | ||||
| } | ||||
| function listTar(archivePath, compressionMethod) { | ||||
|     return __awaiter(this, void 0, void 0, function* () { | ||||
|         const args = [ | ||||
|             ...getCompressionProgram(compressionMethod), | ||||
|             '-tf', | ||||
|             archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), | ||||
|             '-P' | ||||
|         ]; | ||||
|         yield execTar(args, compressionMethod); | ||||
|     }); | ||||
| } | ||||
| exports.listTar = listTar; | ||||
| function extractTar(archivePath, compressionMethod) { | ||||
|     return __awaiter(this, void 0, void 0, function* () { | ||||
|         // Create directory to extract tar into
 | ||||
|         const workingDirectory = getWorkingDirectory(); | ||||
|         yield io.mkdirP(workingDirectory); | ||||
|         // --d: Decompress.
 | ||||
|         // --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
 | ||||
|         // Using 30 here because we also support 32-bit self-hosted runners.
 | ||||
|         function getCompressionProgram() { | ||||
|             switch (compressionMethod) { | ||||
|                 case constants_1.CompressionMethod.Zstd: | ||||
|                     return ['--use-compress-program', 'zstd -d --long=30']; | ||||
|                 case constants_1.CompressionMethod.ZstdWithoutLong: | ||||
|                     return ['--use-compress-program', 'zstd -d']; | ||||
|                 default: | ||||
|                     return ['-z']; | ||||
|             } | ||||
|         } | ||||
|         const args = [ | ||||
|             ...getCompressionProgram(compressionMethod), | ||||
|             ...getCompressionProgram(), | ||||
|             '-xf', | ||||
|             archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), | ||||
|             '-P', | ||||
| @@ -37343,19 +37299,15 @@ function createTar(archiveFolder, sourceDirectories, compressionMethod) { | ||||
|         fs_1.writeFileSync(path.join(archiveFolder, manifestFilename), sourceDirectories.join('\n')); | ||||
|         const workingDirectory = getWorkingDirectory(); | ||||
|         // -T#: Compress using # working thread. If # is 0, attempt to detect and use the number of physical CPU cores.
 | ||||
|         // zstdmt is equivalent to 'zstd -T0'
 | ||||
|         // --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
 | ||||
|         // Using 30 here because we also support 32-bit self-hosted runners.
 | ||||
|         // Long range mode is added to zstd in v1.3.2 release, so we will not use --long in older version of zstd.
 | ||||
|         function getCompressionProgram() { | ||||
|             switch (compressionMethod) { | ||||
|                 case constants_1.CompressionMethod.Zstd: | ||||
|                     return [ | ||||
|                         '--use-compress-program', | ||||
|                         IS_WINDOWS ? 'zstd -T0 --long=30' : 'zstdmt --long=30' | ||||
|                     ]; | ||||
|                     return ['--use-compress-program', 'zstd -T0 --long=30']; | ||||
|                 case constants_1.CompressionMethod.ZstdWithoutLong: | ||||
|                     return ['--use-compress-program', IS_WINDOWS ? 'zstd -T0' : 'zstdmt']; | ||||
|                     return ['--use-compress-program', 'zstd -T0']; | ||||
|                 default: | ||||
|                     return ['-z']; | ||||
|             } | ||||
| @@ -37377,6 +37329,32 @@ function createTar(archiveFolder, sourceDirectories, compressionMethod) { | ||||
|     }); | ||||
| } | ||||
| exports.createTar = createTar; | ||||
| function listTar(archivePath, compressionMethod) { | ||||
|     return __awaiter(this, void 0, void 0, function* () { | ||||
|         // --d: Decompress.
 | ||||
|         // --long=#: Enables long distance matching with # bits.
 | ||||
|         // Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
 | ||||
|         // Using 30 here because we also support 32-bit self-hosted runners.
 | ||||
|         function getCompressionProgram() { | ||||
|             switch (compressionMethod) { | ||||
|                 case constants_1.CompressionMethod.Zstd: | ||||
|                     return ['--use-compress-program', 'zstd -d --long=30']; | ||||
|                 case constants_1.CompressionMethod.ZstdWithoutLong: | ||||
|                     return ['--use-compress-program', 'zstd -d']; | ||||
|                 default: | ||||
|                     return ['-z']; | ||||
|             } | ||||
|         } | ||||
|         const args = [ | ||||
|             ...getCompressionProgram(), | ||||
|             '-tf', | ||||
|             archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), | ||||
|             '-P' | ||||
|         ]; | ||||
|         yield execTar(args, compressionMethod); | ||||
|     }); | ||||
| } | ||||
| exports.listTar = listTar; | ||||
| //# sourceMappingURL=tar.js.map
 | ||||
| 
 | ||||
| /***/ }), | ||||
@@ -40811,8 +40789,7 @@ function getDownloadOptions(copy) {
     const result = {
         useAzureSdk: true,
         downloadConcurrency: 8,
-        timeoutInMs: 30000,
-        segmentTimeoutInMs: 3600000
+        timeoutInMs: 30000
     };
     if (copy) {
         if (typeof copy.useAzureSdk === 'boolean') {
@@ -40824,21 +40801,10 @@ function getDownloadOptions(copy) {
         if (typeof copy.timeoutInMs === 'number') {
             result.timeoutInMs = copy.timeoutInMs;
         }
-        if (typeof copy.segmentTimeoutInMs === 'number') {
-            result.segmentTimeoutInMs = copy.segmentTimeoutInMs;
-        }
     }
-    const segmentDownloadTimeoutMins = process.env['SEGMENT_DOWNLOAD_TIMEOUT_MINS'];
-    if (segmentDownloadTimeoutMins &&
-        !isNaN(Number(segmentDownloadTimeoutMins)) &&
-        isFinite(Number(segmentDownloadTimeoutMins))) {
-        result.segmentTimeoutInMs = Number(segmentDownloadTimeoutMins) * 60 * 1000;
-    }
     core.debug(`Use Azure SDK: ${result.useAzureSdk}`);
     core.debug(`Download concurrency: ${result.downloadConcurrency}`);
     core.debug(`Request timeout (ms): ${result.timeoutInMs}`);
-    core.debug(`Cache segment download timeout mins env var: ${process.env['SEGMENT_DOWNLOAD_TIMEOUT_MINS']}`);
-    core.debug(`Segment download timeout (ms): ${result.segmentTimeoutInMs}`);
     return result;
 }
 exports.getDownloadOptions = getDownloadOptions;
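After this hunk, `getDownloadOptions` merges caller overrides onto three defaults only; the `segmentTimeoutInMs` field and its `SEGMENT_DOWNLOAD_TIMEOUT_MINS` environment override are removed. A minimal sketch of the post-revert merge, as a standalone re-implementation with the `core.debug` calls omitted:

```javascript
function getDownloadOptions(copy) {
  // Defaults after the revert: no segmentTimeoutInMs field at all.
  const result = {
    useAzureSdk: true,
    downloadConcurrency: 8,
    timeoutInMs: 30000
  };
  if (copy) {
    // Each override is applied only when it has the expected type.
    if (typeof copy.useAzureSdk === 'boolean') {
      result.useAzureSdk = copy.useAzureSdk;
    }
    if (typeof copy.downloadConcurrency === 'number') {
      result.downloadConcurrency = copy.downloadConcurrency;
    }
    if (typeof copy.timeoutInMs === 'number') {
      result.timeoutInMs = copy.timeoutInMs;
    }
  }
  return result;
}
```

The type guards mean a malformed override (e.g. `downloadConcurrency: "8"`) is silently ignored and the default is kept.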
							
								
								
									
examples.md (14 lines changed)
@@ -1,7 +1,6 @@
 # Examples

 - [C# - NuGet](#c---nuget)
-- [Clojure - Lein Deps](#clojure---lein-deps)
 - [D - DUB](#d---dub)
   - [POSIX](#posix)
   - [Windows](#windows)
@@ -81,19 +80,6 @@ steps:
         ${{ runner.os }}-nuget-
 ```

-## Clojure - Lein Deps
-
-```yaml
-- name: Cache lein project dependencies
-  uses: actions/cache@v3
-  with:
-    path: ~/.m2/repository
-    key: ${{ runner.os }}-clojure-${{ hashFiles('**/project.clj') }}
-    restore-keys: |
-      ${{ runner.os }}-clojure
-```
-
-
 ## D - DUB

 ### POSIX
							
								
								
									
package-lock.json (18 lines changed, generated)
@@ -1,15 +1,15 @@
 {
   "name": "cache",
-  "version": "3.0.8",
+  "version": "3.0.4",
   "lockfileVersion": 2,
   "requires": true,
   "packages": {
     "": {
       "name": "cache",
-      "version": "3.0.8",
+      "version": "3.0.4",
       "license": "MIT",
       "dependencies": {
-        "@actions/cache": "^3.0.4",
+        "@actions/cache": "^3.0.0",
         "@actions/core": "^1.7.0",
         "@actions/exec": "^1.1.1",
         "@actions/io": "^1.1.2"
@@ -36,9 +36,9 @@
       }
     },
     "node_modules/@actions/cache": {
-      "version": "3.0.4",
-      "resolved": "https://registry.npmjs.org/@actions/cache/-/cache-3.0.4.tgz",
-      "integrity": "sha512-9RwVL8/ISJoYWFNH1wR/C26E+M3HDkGPWmbFJMMCKwTkjbNZJreMT4XaR/EB1bheIvN4PREQxEQQVJ18IPnf/Q==",
+      "version": "3.0.0",
+      "resolved": "https://registry.npmjs.org/@actions/cache/-/cache-3.0.0.tgz",
+      "integrity": "sha512-GL9CT1Fnu+pqs8TTB621q8Xa8Cilw2n9MwvbgMedetH7L1q2n6jY61gzbwGbKgtVbp3gVJ12aNMi4osSGXx3KQ==",
       "dependencies": {
         "@actions/core": "^1.2.6",
         "@actions/exec": "^1.0.1",
@@ -9533,9 +9533,9 @@
   },
   "dependencies": {
     "@actions/cache": {
-      "version": "3.0.4",
-      "resolved": "https://registry.npmjs.org/@actions/cache/-/cache-3.0.4.tgz",
-      "integrity": "sha512-9RwVL8/ISJoYWFNH1wR/C26E+M3HDkGPWmbFJMMCKwTkjbNZJreMT4XaR/EB1bheIvN4PREQxEQQVJ18IPnf/Q==",
+      "version": "3.0.0",
+      "resolved": "https://registry.npmjs.org/@actions/cache/-/cache-3.0.0.tgz",
+      "integrity": "sha512-GL9CT1Fnu+pqs8TTB621q8Xa8Cilw2n9MwvbgMedetH7L1q2n6jY61gzbwGbKgtVbp3gVJ12aNMi4osSGXx3KQ==",
       "requires": {
         "@actions/core": "^1.2.6",
         "@actions/exec": "^1.0.1",
package.json

@@ -1,6 +1,6 @@
 {
   "name": "cache",
-  "version": "3.0.8",
+  "version": "3.0.4",
   "private": true,
   "description": "Cache dependencies and build outputs",
   "main": "dist/restore/index.js",
@@ -23,7 +23,7 @@
   "author": "GitHub",
   "license": "MIT",
   "dependencies": {
-    "@actions/cache": "^3.0.4",
+    "@actions/cache": "^3.0.0",
     "@actions/core": "^1.7.0",
     "@actions/exec": "^1.1.1",
     "@actions/io": "^1.1.2"