curl --output file
curl -o file
wget -O file
Best to just always use curl, and know that it uses lowercase for the common options like -o, as is the norm.
It's usually already installed as well.
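For example, to save a page under a chosen name (page.html and the URL are just placeholders):
curl -o page.html https://example.com/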
export HAR_FILE="/path/to/har/file"
Example to dump responses for a given request URI:
REQUEST_URI="https://www.facebook.com/api/graphql/"; cat $HAR_FILE | jq -r ".log.entries[] | if .request.url | test(\"$REQUEST_URI\") then .response.content else empty end"
The jq program is in double quotes (") so that $REQUEST_URI is interpolated by the shell. That means any double quotes inside the program, such as in test("foo"), must be escaped like test(\"foo\").
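A minimal illustration of the same quoting, using a throwaway PATTERN variable and some inline JSON:
PATTERN="foo"; echo '{"name":"foobar"}' | jq "if .name | test(\"$PATTERN\") then . else empty end"
The shell expands $PATTERN before jq ever sees the program, so jq receives test("foo").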
Another way to do the same thing in bash using single quotes. Quotes can be tricky.
REQUEST_URI="https://www.facebook.com/api/graphql/"; cat $HAR_FILE | jq -r '.log.entries[] | if .request.url | test("'$REQUEST_URI'") then { uri: .request.url, mimeType: .response.content.mimeType, content: .response.content.text | .[0:200] } else empty end'
The jq program is in three parts, concatenated by the shell:
'...etc...test("'
$REQUEST_URI
'") then...etc...else empty end'
Dump the full response content, interpreted as JSON
...todo...
...todo
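Until that is filled in, a possible sketch, assuming the response body is itself JSON text (the same fromjson trick used further down):
REQUEST_URI="https://www.facebook.com/api/graphql/"; cat $HAR_FILE | jq -r ".log.entries[] | if .request.url | test(\"$REQUEST_URI\") then .response.content.text | fromjson else empty end"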
Goal: Extract URLs of all your playlists
(under development)
cat $HAR_FILE | jq -r '.log.entries[] | select( .request.url | test("^https://music.youtube.com/youtubei") ) | .response.content.text' > $REQS_FILE
cat $REQS_FILE | while read line; do echo "$line" | jq '.contents.singleColumnBrowseResultsRenderer.tabs[].tabRenderer.content.sectionListRenderer.contents[].musicCarouselShelfRenderer.contents[].musicTwoRowItemRenderer.title.runs[] | { name: .text, id: .navigationEndpoint.browseEndpoint.browseId }'; done > $PLAYLISTS_FILE
cat playlists.2 | jq -r 'getpath( paths | select(.[-1] == "browseId") ) | select(. | match("^VLPL"))'
cat $REQS_FILE | perl -lne'@ids = m/"browseId":"([^"]+)"/g; print $_ foreach map { s/^VL//; $_ } grep { /^VLPL/ && length($_) > 22 } @ids' | uniq > $PLAYLISTS_FILE
cat $HAR_FILE | jq -r '.log.entries[] | select( .request.url | test("^https://music.youtube.com/library/playlists") ) | .response.content.text' > $SCRIPT_DATA
cat $SCRIPT_DATA | perl -plne's/(\\x[[:xdigit:]]{2})/qq{"$1"}/eeg' > $DECODED_SCRIPT_DATA
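A quick check of what that /ee double-eval does, on a throwaway string (in bash):
echo 'a\x2Fb\x2Fc' | perl -plne's/(\\x[[:xdigit:]]{2})/qq{"$1"}/eeg'
which prints a/b/c, i.e. each \xNN escape is replaced by the character it encodes.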
Goal: Extract list of alternative software
Fetch JSON
Extract data
export REGEX="software/gmail.json"; cat alternativeto.net.har | jq -r ".log.entries[] | if .request.url | test(\"$REGEX\") then .response.content.text else empty end" > page_per_line
Change the [] above to [0] to get one page, and pipe the result through jq again, or use the fromjson filter as follows:
export REGEX="software/gmail.json"; cat alternativeto.net.har | jq -r ".log.entries[0] | if .request.url | test(\"$REGEX\") then .response.content.text | fromjson else empty end" > one_page_one_line
export REGEX="software/gmail.json"; cat alternativeto.net.har | jq -r ".log.entries[] | if .request.url | test(\"$REGEX\") then .response.content.text | fromjson | .pageProps.items[] | { name: .name, cost: .licenseCost, model: .licenseModel, desc: .shortDescriptionOrTagLine } else empty end" > software.json
Sample output
{
"name": "Mailfence",
"cost": "Freemium",
"model": "Proprietary",
"desc": "Mailfence is a secure and private email service that fights for online privacy and digital freedom."
}
{
"name": "Proton Mail",
"cost": "Freemium",
"model": "Open Source",
"desc": "Secure email with absolutely no compromises, brought to you by MIT and CERN scientists."
}
...etc
Some of the names contain HTML entities; the he package can decode them, e.g. it converts AT&amp;T Webmail to AT&T Webmail.
npm install -g he
cat software.json | jq '.name' -r | he --decode
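A quick sanity check on a single encoded value:
echo 'AT&amp;T Webmail' | he --decode
which should print AT&T Webmail.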
For very simple test examples, you must quote inputs twice, i.e. pass "foo" with the quotes included, because jq expects its input to be valid JSON (a bare foo is not).
echo '"hello"' | jq '.'
Regex. gsub = global substitution. Note the semicolon (;) used to separate the arguments to gsub().
echo '"foo\r\nbar"' | jq -r 'gsub("(\r\n.+)"; "")'