1 | DENIED | ROLE_USER | null |
2 | DENIED | moderate |
App\Entity\Entry {#1831
+user: Proxies\__CG__\App\Entity\User {#1960 …}
+magazine: App\Entity\Magazine {#260
+icon: null
+name: "datahoarder@lemmy.ml"
+title: "datahoarder"
+description: """
**Who are we?**\n
\n
We are digital librarians. Among us are represented the various reasons to keep data – legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they’re sure it’s done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.\n
\n
We are one. We are legion. And we’re trying really hard not to forget.\n
\n
– 5-4-3-2-1-bang from [this thread](https://web.archive.org/web/20221111153119/https://old.reddit.com/r/DataHoarder/comments/41tqt4/hi_guys_can_i_kindly_ask_for_an_eli5_of_this/cz53pi0/)
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 58
+entryCommentCount: 311
+postCount: 1
+postCommentCount: 1
+isAdult: false
+customCss: null
+lastActive: DateTime @1729502222 {#268
date: 2024-10-21 11:17:02.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#247 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#243 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#232 …}
+entries: Doctrine\ORM\PersistentCollection {#190 …}
+posts: Doctrine\ORM\PersistentCollection {#148 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#210 …}
+bans: Doctrine\ORM\PersistentCollection {#127 …}
+reports: Doctrine\ORM\PersistentCollection {#113 …}
+badges: Doctrine\ORM\PersistentCollection {#91 …}
+logs: Doctrine\ORM\PersistentCollection {#81 …}
+awards: Doctrine\ORM\PersistentCollection {#70 …}
+categories: Doctrine\ORM\PersistentCollection {#59 …}
-id: 32
+apId: "datahoarder@lemmy.ml"
+apProfileId: "https://lemmy.ml/c/datahoarder"
+apPublicUrl: "https://lemmy.ml/c/datahoarder"
+apFollowersUrl: "https://lemmy.ml/c/datahoarder/followers"
+apInboxUrl: "https://lemmy.ml/inbox"
+apDomain: "lemmy.ml"
+apPreferredUsername: "datahoarder"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1729303437 {#266
date: 2024-10-19 04:03:57.0 +02:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1695727192 {#274
date: 2023-09-26 13:19:52.0 +02:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1876 …}
+slug: "Google-Books-colour-images"
+title: "Google Books - colour images"
+url: null
+body: """
Google Books allows viewing the scans in colour, but when I click the option to download the PDF, I am provided only with a black-and-white version.\n
\n
Is it known how to obtain the original colour images, outside of *inspectelementing* each page one by one?
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 2
+favouriteCount: 0
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1729502222 {#1808
date: 2024-10-21 11:17:02.0 +02:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1916 …}
+votes: Doctrine\ORM\PersistentCollection {#1974 …}
+reports: Doctrine\ORM\PersistentCollection {#1837 …}
+favourites: Doctrine\ORM\PersistentCollection {#1936 …}
+notifications: Doctrine\ORM\PersistentCollection {#2429 …}
+badges: Doctrine\ORM\PersistentCollection {#2424 …}
+children: []
-id: 2367
-titleTs: "'book':2 'colour':3 'googl':1 'imag':4"
-bodyTs: "'allow':3 'black':26 'black-and-whit':25 'book':2 'click':12 'colour':8,38 'download':16 'googl':1 'imag':39 'inspectel':42 'known':32 'obtain':35 'one':45,47 'option':14 'origin':37 'outsid':40 'page':44 'pdf':18 'provid':21 'scan':6 'version':29 'view':4 'white':28"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1694413852
+visibility: "visible "
+apId: "https://lemmy.dbzer0.com/post/3811915"
+editedAt: null
+createdAt: DateTimeImmutable @1694400852 {#2406
date: 2023-09-11 04:54:12.0 +02:00
}
} |
3 | DENIED | edit |
App\Entity\Entry {#1831 …} |
4 | DENIED | moderate |
App\Entity\Entry {#1831 …} |
5 | DENIED | ROLE_USER | null |
6 | DENIED | moderate |
App\Entity\EntryComment {#4016
+user: App\Entity\User {#3964 …}
+entry: App\Entity\Entry {#1831 …}
+magazine: App\Entity\Magazine {#260}
+image: null
+parent: null
+root: null
+body: """
I just spent a bit too much time making this (it was fun), so don’t even tell me if you’re not going to use it.\n
\n
You can open up a desired book’s page, start this first script in the console, and then scroll through the book:\n
\n
```\n
let imgs = new Set();\n
\n
function cheese() {\n
    // Collect the src of every page image currently in the DOM.\n
    for (let img of document.getElementsByTagName("img")) {\n
        if (img.parentElement.parentElement.className == "pageImageDisplay") imgs.add(img.attributes["src"].value);\n
    }\n
}\n
\n
setInterval(cheese, 5);\n
```\n
\n
And once you’re done you may run this script to download each image:\n
\n
```\n
function toDataURL(url) {\n
    return fetch(url).then((response) => {\n
        return response.blob();\n
    }).then(blob => {\n
        return URL.createObjectURL(blob);\n
    });\n
}\n
\n
async function asd() {\n
    for (let img of imgs) {\n
        const a = document.createElement("a");\n
        a.href = await toDataURL(img);\n
        let name;\n
        // Use the pg= query parameter of the image URL as the file name.\n
        for (let thing of img.split("&")) {\n
            if (thing.startsWith("pg=")) {\n
                name = thing.split("=")[1];\n
                console.log(name);\n
                break;\n
            }\n
        }\n
        a.download = name;\n
        document.body.appendChild(a);\n
        a.click();\n
        document.body.removeChild(a);\n
    }\n
}\n
\n
asd();\n
```\n
\n
Alternatively, you may simply run something like this to get the links:\n
\n
```\n
for (let img of imgs) {\n
    console.log(img);\n
}\n
```\n
\n
There’s stuff you can tweak, of course, if it doesn’t quite work for you. Worked fine in my tests.\n
\n
If you notice a page missing, you should be able to just scroll back to it and then download again to get everything. The first script just keeps collecting pages until you refresh the site. That also means you should refresh once you are done downloading, as it eats CPU for breakfast.\n
\n
Oh and ***NEVER RUN ANY JAVASCRIPT CODE SOMEONE ON THE INTERNET TELLS YOU TO RUN***
"""
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1729502222 {#4025
date: 2024-10-21 11:17:02.0 +02:00
}
+ip: null
+tags: [
"323232"
]
+mentions: [
"@antonim@lemmy.dbzer0.com"
]
+children: Doctrine\ORM\PersistentCollection {#4014 …}
+nested: Doctrine\ORM\PersistentCollection {#4012 …}
+votes: Doctrine\ORM\PersistentCollection {#4010 …}
+reports: Doctrine\ORM\PersistentCollection {#4008 …}
+favourites: Doctrine\ORM\PersistentCollection {#3976 …}
+notifications: Doctrine\ORM\PersistentCollection {#3980 …}
-id: 30017
-bodyTs: "'1':129 '5':71 'a.click':137 'a.download':133 'a.href':113 'abl':190 'also':217 'altern':141 'asd':103,140 'async':101 'await':114 'back':194 'bit':5 'blob':97,100 'book':34,49 'break':132 'breakfast':232 'chees':55,70 'code':239 'collect':209 'consol':43 'console.log':130,158 'const':109 'cours':167 'cpu':230 'desir':33 'document.body.appendchild':135 'document.body.removechild':138 'document.createelement':111 'document.getelementsbytagname':60 'done':76,225 'download':83,199,226 'eat':229 'even':17 'everyth':203 'fetch':90 'fine':177 'first':39,205 'fun':13 'function':54,86,102 'get':150,202 'go':24 'imag':85 'img':51,58,61,106,108,116,155,157,159 'img.attributes':66 'img.parentelement.parentelement.classname':63 'img.split':123 'imgs.add':65 'internet':243 'javascript':238 'keep':208 'let':50,57,105,117,120,154 'like':147 'link':152 'make':9 'may':78,143 'mean':218 'miss':186 'much':7 'name':118,127,131,134 'never':235 'new':52 'notic':183 'oh':233 'open':30 'page':36,185,210 'pageimagedisplay':64 'pg':126 'quit':172 're':22,75 'refresh':213,221 'respons':93 'response.blob':95 'return':89,94,98 'run':79,145,236,247 'script':40,81,206 'scroll':46,193 'set':53 'setinterv':69 'simpli':144 'site':215 'someon':240 'someth':146 'spent':3 'src':67 'start':37 'stuff':162 'tell':18,244 'test':180 'thing':121 'thing.split':128 'thing.startswith':125 'till':211 'time':8 'todataurl':87,115 'tweak':165 'url':88,91 'url.createobjecturl':99 'use':26 'valu':68 'work':173,176"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemm.ee/comment/3445560"
+editedAt: null
+createdAt: DateTimeImmutable @1694484327 {#3961
date: 2023-09-12 04:05:27.0 +02:00
}
} |
7 | DENIED | edit |
App\Entity\EntryComment {#4016 …} |
|
Show voter details
|
8 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4016
+user: App\Entity\User {#3964 …}
+entry: App\Entity\Entry {#1831
+user: Proxies\__CG__\App\Entity\User {#1960 …}
+magazine: App\Entity\Magazine {#260
+icon: null
+name: "datahoarder@lemmy.ml"
+title: "datahoarder"
+description: """
**Who are we?**\n
\n
We are digital librarians. Among us are represented the various reasons to keep data – legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they’re sure it’s done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.\n
\n
We are one. We are legion. And we’re trying really hard not to forget.\n
\n
– 5-4-3-2-1-bang from [this thread](https://web.archive.org/web/20221111153119/https://old.reddit.com/r/DataHoarder/comments/41tqt4/hi_guys_can_i_kindly_ask_for_an_eli5_of_this/cz53pi0/)
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 58
+entryCommentCount: 311
+postCount: 1
+postCommentCount: 1
+isAdult: false
+customCss: null
+lastActive: DateTime @1729502222 {#268
date: 2024-10-21 11:17:02.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#247 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#243 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#232 …}
+entries: Doctrine\ORM\PersistentCollection {#190 …}
+posts: Doctrine\ORM\PersistentCollection {#148 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#210 …}
+bans: Doctrine\ORM\PersistentCollection {#127 …}
+reports: Doctrine\ORM\PersistentCollection {#113 …}
+badges: Doctrine\ORM\PersistentCollection {#91 …}
+logs: Doctrine\ORM\PersistentCollection {#81 …}
+awards: Doctrine\ORM\PersistentCollection {#70 …}
+categories: Doctrine\ORM\PersistentCollection {#59 …}
-id: 32
+apId: "datahoarder@lemmy.ml"
+apProfileId: "https://lemmy.ml/c/datahoarder"
+apPublicUrl: "https://lemmy.ml/c/datahoarder"
+apFollowersUrl: "https://lemmy.ml/c/datahoarder/followers"
+apInboxUrl: "https://lemmy.ml/inbox"
+apDomain: "lemmy.ml"
+apPreferredUsername: "datahoarder"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1729303437 {#266
date: 2024-10-19 04:03:57.0 +02:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1695727192 {#274
date: 2023-09-26 13:19:52.0 +02:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1876 …}
+slug: "Google-Books-colour-images"
+title: "Google Books - colour images"
+url: null
+body: """
Google Books allows viewing the scans in colour, but when I click the option to download the PDF, I am provided only with a black-and-white version.\n
\n
Is it known how to obtain the original colour images, outside of *inspectelementing* each page one by one?
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 2
+favouriteCount: 0
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1729502222 {#1808
date: 2024-10-21 11:17:02.0 +02:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1916 …}
+votes: Doctrine\ORM\PersistentCollection {#1974 …}
+reports: Doctrine\ORM\PersistentCollection {#1837 …}
+favourites: Doctrine\ORM\PersistentCollection {#1936 …}
+notifications: Doctrine\ORM\PersistentCollection {#2429 …}
+badges: Doctrine\ORM\PersistentCollection {#2424 …}
+children: []
-id: 2367
-titleTs: "'book':2 'colour':3 'googl':1 'imag':4"
-bodyTs: "'allow':3 'black':26 'black-and-whit':25 'book':2 'click':12 'colour':8,38 'download':16 'googl':1 'imag':39 'inspectel':42 'known':32 'obtain':35 'one':45,47 'option':14 'origin':37 'outsid':40 'page':44 'pdf':18 'provid':21 'scan':6 'version':29 'view':4 'white':28"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1694413852
+visibility: "visible "
+apId: "https://lemmy.dbzer0.com/post/3811915"
+editedAt: null
+createdAt: DateTimeImmutable @1694400852 {#2406
date: 2023-09-11 04:54:12.0 +02:00
}
}
+magazine: App\Entity\Magazine {#260}
+image: null
+parent: null
+root: null
+body: """
I just spent a bit too much time making this (it was fun), so don’t even tell me if you’re not going to use it.\n
\n
You can open up a desired book’s page, start this first script in the console, and then scroll through the book:\n
\n
```\n
\n
let imgs = new Set();\n
\n
function cheese() {\n
  for (let img of document.getElementsByTagName("img")) {\n
    if (img.parentElement.parentElement.className == "pageImageDisplay") imgs.add(img.attributes["src"].value);\n
  }\n
}\n
\n
setInterval(cheese, 5);\n
```\n
\n
And once you’re done you may run this script to download each image:\n
\n
```\n
\n
function toDataURL(url) {\n
  return fetch(url).then((response) => {\n
    return response.blob();\n
  }).then(blob => {\n
    return URL.createObjectURL(blob);\n
  });\n
}\n
\n
async function asd() {\n
  for (let img of imgs) {\n
    const a = document.createElement("a");\n
    a.href = await toDataURL(img);\n
    let name;\n
    for (let thing of img.split("&")) {\n
      if (thing.startsWith("pg=")) {\n
        name = thing.split("=")[1];\n
        console.log(name);\n
        break;\n
      }\n
    }\n
    a.download = name;\n
    document.body.appendChild(a);\n
    a.click();\n
    document.body.removeChild(a);\n
  }\n
}\n
\n
asd();\n
```\n
\n
Alternatively you may simply run something like this to get the links:\n
\n
```\n
\n
for (let img of imgs) {\n
  console.log(img);\n
}\n
```\n
\n
There’s stuff you can tweak, of course, if it doesn’t quite work for you. Worked fine in my tests.\n
\n
If you notice a page missing, you should be able to just scroll back to it and then download again to get everything. The first script just keeps collecting pages till you refresh the site. Which also means you should refresh once you are done downloading, as it eats CPU for breakfast.\n
\n
Oh and ***NEVER RUN ANY JAVASCRIPT CODE SOMEONE ON THE INTERNET TELLS YOU TO RUN***
"""
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1729502222 {#4025
date: 2024-10-21 11:17:02.0 +02:00
}
+ip: null
+tags: [
"323232"
]
+mentions: [
"@antonim@lemmy.dbzer0.com"
]
+children: Doctrine\ORM\PersistentCollection {#4014 …}
+nested: Doctrine\ORM\PersistentCollection {#4012 …}
+votes: Doctrine\ORM\PersistentCollection {#4010 …}
+reports: Doctrine\ORM\PersistentCollection {#4008 …}
+favourites: Doctrine\ORM\PersistentCollection {#3976 …}
+notifications: Doctrine\ORM\PersistentCollection {#3980 …}
-id: 30017
-bodyTs: "'1':129 '5':71 'a.click':137 'a.download':133 'a.href':113 'abl':190 'also':217 'altern':141 'asd':103,140 'async':101 'await':114 'back':194 'bit':5 'blob':97,100 'book':34,49 'break':132 'breakfast':232 'chees':55,70 'code':239 'collect':209 'consol':43 'console.log':130,158 'const':109 'cours':167 'cpu':230 'desir':33 'document.body.appendchild':135 'document.body.removechild':138 'document.createelement':111 'document.getelementsbytagname':60 'done':76,225 'download':83,199,226 'eat':229 'even':17 'everyth':203 'fetch':90 'fine':177 'first':39,205 'fun':13 'function':54,86,102 'get':150,202 'go':24 'imag':85 'img':51,58,61,106,108,116,155,157,159 'img.attributes':66 'img.parentelement.parentelement.classname':63 'img.split':123 'imgs.add':65 'internet':243 'javascript':238 'keep':208 'let':50,57,105,117,120,154 'like':147 'link':152 'make':9 'may':78,143 'mean':218 'miss':186 'much':7 'name':118,127,131,134 'never':235 'new':52 'notic':183 'oh':233 'open':30 'page':36,185,210 'pageimagedisplay':64 'pg':126 'quit':172 're':22,75 'refresh':213,221 'respons':93 'response.blob':95 'return':89,94,98 'run':79,145,236,247 'script':40,81,206 'scroll':46,193 'set':53 'setinterv':69 'simpli':144 'site':215 'someon':240 'someth':146 'spent':3 'src':67 'start':37 'stuff':162 'tell':18,244 'test':180 'thing':121 'thing.split':128 'thing.startswith':125 'till':211 'time':8 'todataurl':87,115 'tweak':165 'url':88,91 'url.createobjecturl':99 'use':26 'valu':68 'work':173,176"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemm.ee/comment/3445560"
+editedAt: null
+createdAt: DateTimeImmutable @1694484327 {#3961
date: 2023-09-12 04:05:27.0 +02:00
}
} |
|
Show voter details
|
9 |
DENIED
|
ROLE_USER
|
null |
|
Show voter details
|
10 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4034
+user: Proxies\__CG__\App\Entity\User {#1960 …}
+entry: App\Entity\Entry {#1831
+user: Proxies\__CG__\App\Entity\User {#1960 …}
+magazine: App\Entity\Magazine {#260
+icon: null
+name: "datahoarder@lemmy.ml"
+title: "datahoarder"
+description: """
**Who are we?**\n
\n
We are digital librarians. Among us are represented the various reasons to keep data – legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they’re sure it’s done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.\n
\n
We are one. We are legion. And we’re trying really hard not to forget.\n
\n
– 5-4-3-2-1-bang from [this thread](https://web.archive.org/web/20221111153119/https://old.reddit.com/r/DataHoarder/comments/41tqt4/hi_guys_can_i_kindly_ask_for_an_eli5_of_this/cz53pi0/)
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 58
+entryCommentCount: 311
+postCount: 1
+postCommentCount: 1
+isAdult: false
+customCss: null
+lastActive: DateTime @1729502222 {#268
date: 2024-10-21 11:17:02.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#247 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#243 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#232 …}
+entries: Doctrine\ORM\PersistentCollection {#190 …}
+posts: Doctrine\ORM\PersistentCollection {#148 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#210 …}
+bans: Doctrine\ORM\PersistentCollection {#127 …}
+reports: Doctrine\ORM\PersistentCollection {#113 …}
+badges: Doctrine\ORM\PersistentCollection {#91 …}
+logs: Doctrine\ORM\PersistentCollection {#81 …}
+awards: Doctrine\ORM\PersistentCollection {#70 …}
+categories: Doctrine\ORM\PersistentCollection {#59 …}
-id: 32
+apId: "datahoarder@lemmy.ml"
+apProfileId: "https://lemmy.ml/c/datahoarder"
+apPublicUrl: "https://lemmy.ml/c/datahoarder"
+apFollowersUrl: "https://lemmy.ml/c/datahoarder/followers"
+apInboxUrl: "https://lemmy.ml/inbox"
+apDomain: "lemmy.ml"
+apPreferredUsername: "datahoarder"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1729303437 {#266
date: 2024-10-19 04:03:57.0 +02:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1695727192 {#274
date: 2023-09-26 13:19:52.0 +02:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1876 …}
+slug: "Google-Books-colour-images"
+title: "Google Books - colour images"
+url: null
+body: """
Google Books allows viewing the scans in colour, but when I click the option to download the PDF, I am provided only with a black-and-white version.\n
\n
Is it known how to obtain the original colour images, outside of *inspectelementing* each page one by one?
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 2
+favouriteCount: 0
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1729502222 {#1808
date: 2024-10-21 11:17:02.0 +02:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1916 …}
+votes: Doctrine\ORM\PersistentCollection {#1974 …}
+reports: Doctrine\ORM\PersistentCollection {#1837 …}
+favourites: Doctrine\ORM\PersistentCollection {#1936 …}
+notifications: Doctrine\ORM\PersistentCollection {#2429 …}
+badges: Doctrine\ORM\PersistentCollection {#2424 …}
+children: []
-id: 2367
-titleTs: "'book':2 'colour':3 'googl':1 'imag':4"
-bodyTs: "'allow':3 'black':26 'black-and-whit':25 'book':2 'click':12 'colour':8,38 'download':16 'googl':1 'imag':39 'inspectel':42 'known':32 'obtain':35 'one':45,47 'option':14 'origin':37 'outsid':40 'page':44 'pdf':18 'provid':21 'scan':6 'version':29 'view':4 'white':28"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1694413852
+visibility: "visible "
+apId: "https://lemmy.dbzer0.com/post/3811915"
+editedAt: null
+createdAt: DateTimeImmutable @1694400852 {#2406
date: 2023-09-11 04:54:12.0 +02:00
}
}
+magazine: App\Entity\Magazine {#260}
+image: null
+parent: App\Entity\EntryComment {#4016
+user: App\Entity\User {#3964 …}
+entry: App\Entity\Entry {#1831}
+magazine: App\Entity\Magazine {#260}
+image: null
+parent: null
+root: null
+body: """
I just spent a bit too much time making this (it was fun), so don’t even tell me if you’re not going to use it.\n
\n
You can open up a desired book’s page, start this first script in the console, and then scroll through the book:\n
\n
```\n
\n
let imgs = new Set();\n
\n
function cheese() {\n
  for (let img of document.getElementsByTagName("img")) {\n
    if (img.parentElement.parentElement.className == "pageImageDisplay") imgs.add(img.attributes["src"].value);\n
  }\n
}\n
\n
setInterval(cheese, 5);\n
```\n
\n
And once you’re done you may run this script to download each image:\n
\n
```\n
\n
function toDataURL(url) {\n
  return fetch(url).then((response) => {\n
    return response.blob();\n
  }).then(blob => {\n
    return URL.createObjectURL(blob);\n
  });\n
}\n
\n
async function asd() {\n
  for (let img of imgs) {\n
    const a = document.createElement("a");\n
    a.href = await toDataURL(img);\n
    let name;\n
    for (let thing of img.split("&")) {\n
      if (thing.startsWith("pg=")) {\n
        name = thing.split("=")[1];\n
        console.log(name);\n
        break;\n
      }\n
    }\n
    a.download = name;\n
    document.body.appendChild(a);\n
    a.click();\n
    document.body.removeChild(a);\n
  }\n
}\n
\n
asd();\n
```\n
\n
Alternatively you may simply run something like this to get the links:\n
\n
```\n
\n
for (let img of imgs) {\n
  console.log(img);\n
}\n
```\n
\n
There’s stuff you can tweak, of course, if it doesn’t quite work for you. Worked fine in my tests.\n
\n
If you notice a page missing, you should be able to just scroll back to it and then download again to get everything. The first script just keeps collecting pages till you refresh the site. Which also means you should refresh once you are done downloading, as it eats CPU for breakfast.\n
\n
Oh and ***NEVER RUN ANY JAVASCRIPT CODE SOMEONE ON THE INTERNET TELLS YOU TO RUN***
"""
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1729502222 {#4025
date: 2024-10-21 11:17:02.0 +02:00
}
+ip: null
+tags: [
"323232"
]
+mentions: [
"@antonim@lemmy.dbzer0.com"
]
+children: Doctrine\ORM\PersistentCollection {#4014 …}
+nested: Doctrine\ORM\PersistentCollection {#4012 …}
+votes: Doctrine\ORM\PersistentCollection {#4010 …}
+reports: Doctrine\ORM\PersistentCollection {#4008 …}
+favourites: Doctrine\ORM\PersistentCollection {#3976 …}
+notifications: Doctrine\ORM\PersistentCollection {#3980 …}
-id: 30017
-bodyTs: "'1':129 '5':71 'a.click':137 'a.download':133 'a.href':113 'abl':190 'also':217 'altern':141 'asd':103,140 'async':101 'await':114 'back':194 'bit':5 'blob':97,100 'book':34,49 'break':132 'breakfast':232 'chees':55,70 'code':239 'collect':209 'consol':43 'console.log':130,158 'const':109 'cours':167 'cpu':230 'desir':33 'document.body.appendchild':135 'document.body.removechild':138 'document.createelement':111 'document.getelementsbytagname':60 'done':76,225 'download':83,199,226 'eat':229 'even':17 'everyth':203 'fetch':90 'fine':177 'first':39,205 'fun':13 'function':54,86,102 'get':150,202 'go':24 'imag':85 'img':51,58,61,106,108,116,155,157,159 'img.attributes':66 'img.parentelement.parentelement.classname':63 'img.split':123 'imgs.add':65 'internet':243 'javascript':238 'keep':208 'let':50,57,105,117,120,154 'like':147 'link':152 'make':9 'may':78,143 'mean':218 'miss':186 'much':7 'name':118,127,131,134 'never':235 'new':52 'notic':183 'oh':233 'open':30 'page':36,185,210 'pageimagedisplay':64 'pg':126 'quit':172 're':22,75 'refresh':213,221 'respons':93 'response.blob':95 'return':89,94,98 'run':79,145,236,247 'script':40,81,206 'scroll':46,193 'set':53 'setinterv':69 'simpli':144 'site':215 'someon':240 'someth':146 'spent':3 'src':67 'start':37 'stuff':162 'tell':18,244 'test':180 'thing':121 'thing.split':128 'thing.startswith':125 'till':211 'time':8 'todataurl':87,115 'tweak':165 'url':88,91 'url.createobjecturl':99 'use':26 'valu':68 'work':173,176"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemm.ee/comment/3445560"
+editedAt: null
+createdAt: DateTimeImmutable @1694484327 {#3961
date: 2023-09-12 04:05:27.0 +02:00
}
}
+root: App\Entity\EntryComment {#4016}
+body: """
Well, I may be technologically semi-literate and I may have felt a bit dizzy when I saw actual code in your comment, but I sure as hell will find a way to put it to use, no matter the cost.\n
\n
You’re terrific, man. No idea what else to say.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1694533159 {#4039
date: 2023-09-12 17:39:19.0 +02:00
}
+ip: null
+tags: null
+mentions: [
"@antonim@lemmy.dbzer0.com"
"@bela@lemm.ee"
]
+children: Doctrine\ORM\PersistentCollection {#4044 …}
+nested: Doctrine\ORM\PersistentCollection {#4043 …}
+votes: Doctrine\ORM\PersistentCollection {#4052 …}
+reports: Doctrine\ORM\PersistentCollection {#4033 …}
+favourites: Doctrine\ORM\PersistentCollection {#4031 …}
+notifications: Doctrine\ORM\PersistentCollection {#4029 …}
-id: 351202
-bodyTs: "'actual':20 'bit':15 'code':21 'comment':24 'cost':42 'dizzi':16 'els':50 'felt':13 'find':31 'hell':29 'idea':48 'liter':8 'man':46 'matter':40 'may':3,11 'put':35 're':44 'saw':19 'say':52 'semi':7 'semi-liter':6 'sure':27 'technolog':5 'terrif':45 'use':38 'way':33 'well':1"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.dbzer0.com/comment/2975607"
+editedAt: null
+createdAt: DateTimeImmutable @1694533159 {#4036
date: 2023-09-12 17:39:19.0 +02:00
}
} |
|
Show voter details
|
11 |
DENIED
|
edit
|
App\Entity\EntryComment {#4034
+user: Proxies\__CG__\App\Entity\User {#1960 …}
+entry: App\Entity\Entry {#1831
+user: Proxies\__CG__\App\Entity\User {#1960 …}
+magazine: App\Entity\Magazine {#260
+icon: null
+name: "datahoarder@lemmy.ml"
+title: "datahoarder"
+description: """
**Who are we?**\n
\n
We are digital librarians. Among us are represented the various reasons to keep data – legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they’re sure it’s done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.\n
\n
We are one. We are legion. And we’re trying really hard not to forget.\n
\n
– 5-4-3-2-1-bang from [this thread](https://web.archive.org/web/20221111153119/https://old.reddit.com/r/DataHoarder/comments/41tqt4/hi_guys_can_i_kindly_ask_for_an_eli5_of_this/cz53pi0/)
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 58
+entryCommentCount: 311
+postCount: 1
+postCommentCount: 1
+isAdult: false
+customCss: null
+lastActive: DateTime @1729502222 {#268
date: 2024-10-21 11:17:02.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#247 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#243 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#232 …}
+entries: Doctrine\ORM\PersistentCollection {#190 …}
+posts: Doctrine\ORM\PersistentCollection {#148 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#210 …}
+bans: Doctrine\ORM\PersistentCollection {#127 …}
+reports: Doctrine\ORM\PersistentCollection {#113 …}
+badges: Doctrine\ORM\PersistentCollection {#91 …}
+logs: Doctrine\ORM\PersistentCollection {#81 …}
+awards: Doctrine\ORM\PersistentCollection {#70 …}
+categories: Doctrine\ORM\PersistentCollection {#59 …}
-id: 32
+apId: "datahoarder@lemmy.ml"
+apProfileId: "https://lemmy.ml/c/datahoarder"
+apPublicUrl: "https://lemmy.ml/c/datahoarder"
+apFollowersUrl: "https://lemmy.ml/c/datahoarder/followers"
+apInboxUrl: "https://lemmy.ml/inbox"
+apDomain: "lemmy.ml"
+apPreferredUsername: "datahoarder"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1729303437 {#266
date: 2024-10-19 04:03:57.0 +02:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1695727192 {#274
date: 2023-09-26 13:19:52.0 +02:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1876 …}
+slug: "Google-Books-colour-images"
+title: "Google Books - colour images"
+url: null
+body: """
Google Books allows viewing the scans in colour, but when I click the option to download the PDF, I am provided only with a black-and-white version.\n
\n
Is it known how to obtain the original colour images, outside of *inspectelementing* each page one by one?
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 2
+favouriteCount: 0
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1729502222 {#1808
date: 2024-10-21 11:17:02.0 +02:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1916 …}
+votes: Doctrine\ORM\PersistentCollection {#1974 …}
+reports: Doctrine\ORM\PersistentCollection {#1837 …}
+favourites: Doctrine\ORM\PersistentCollection {#1936 …}
+notifications: Doctrine\ORM\PersistentCollection {#2429 …}
+badges: Doctrine\ORM\PersistentCollection {#2424 …}
+children: []
-id: 2367
-titleTs: "'book':2 'colour':3 'googl':1 'imag':4"
-bodyTs: "'allow':3 'black':26 'black-and-whit':25 'book':2 'click':12 'colour':8,38 'download':16 'googl':1 'imag':39 'inspectel':42 'known':32 'obtain':35 'one':45,47 'option':14 'origin':37 'outsid':40 'page':44 'pdf':18 'provid':21 'scan':6 'version':29 'view':4 'white':28"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1694413852
+visibility: "visible "
+apId: "https://lemmy.dbzer0.com/post/3811915"
+editedAt: null
+createdAt: DateTimeImmutable @1694400852 {#2406
date: 2023-09-11 04:54:12.0 +02:00
}
}
+magazine: App\Entity\Magazine {#260}
+image: null
+parent: App\Entity\EntryComment {#4016
+user: App\Entity\User {#3964 …}
+entry: App\Entity\Entry {#1831}
+magazine: App\Entity\Magazine {#260}
+image: null
+parent: null
+root: null
+body: """
I just spent a bit too much time making this (it was fun), so don’t even tell me if you’re not going to use it.\n
\n
You can open up a desired book’s page, start this first script in the console, and then scroll through the book:\n
\n
```\n
\n
let imgs = new Set();\n
\n
function cheese() {\n
  for (let img of document.getElementsByTagName("img")) {\n
    if (img.parentElement.parentElement.className == "pageImageDisplay") imgs.add(img.attributes["src"].value);\n
  }\n
}\n
\n
setInterval(cheese, 5);\n
```\n
\n
And once you’re done you may run this script to download each image:\n
\n
```\n
\n
function toDataURL(url) {\n
  return fetch(url).then((response) => {\n
    return response.blob();\n
  }).then(blob => {\n
    return URL.createObjectURL(blob);\n
  });\n
}\n
\n
async function asd() {\n
  for (let img of imgs) {\n
    const a = document.createElement("a");\n
    a.href = await toDataURL(img);\n
    let name;\n
    for (let thing of img.split("&")) {\n
      if (thing.startsWith("pg=")) {\n
        name = thing.split("=")[1];\n
        console.log(name);\n
        break;\n
      }\n
    }\n
    a.download = name;\n
    document.body.appendChild(a);\n
    a.click();\n
    document.body.removeChild(a);\n
  }\n
}\n
\n
asd();\n
```\n
\n
Alternatively you may simply run something like this to get the links:\n
\n
```\n
\n
for (let img of imgs) {\n
  console.log(img);\n
}\n
```\n
\n
There’s stuff you can tweak, of course, if it doesn’t quite work for you. Worked fine in my tests.\n
\n
If you notice a page missing, you should be able to just scroll back to it and then download again to get everything. The first script just keeps collecting pages till you refresh the site. Which also means you should refresh once you are done downloading, as it eats CPU for breakfast.\n
\n
Oh and ***NEVER RUN ANY JAVASCRIPT CODE SOMEONE ON THE INTERNET TELLS YOU TO RUN***
"""
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1729502222 {#4025
date: 2024-10-21 11:17:02.0 +02:00
}
+ip: null
+tags: [
"323232"
]
+mentions: [
"@antonim@lemmy.dbzer0.com"
]
+children: Doctrine\ORM\PersistentCollection {#4014 …}
+nested: Doctrine\ORM\PersistentCollection {#4012 …}
+votes: Doctrine\ORM\PersistentCollection {#4010 …}
+reports: Doctrine\ORM\PersistentCollection {#4008 …}
+favourites: Doctrine\ORM\PersistentCollection {#3976 …}
+notifications: Doctrine\ORM\PersistentCollection {#3980 …}
-id: 30017
-bodyTs: "'1':129 '5':71 'a.click':137 'a.download':133 'a.href':113 'abl':190 'also':217 'altern':141 'asd':103,140 'async':101 'await':114 'back':194 'bit':5 'blob':97,100 'book':34,49 'break':132 'breakfast':232 'chees':55,70 'code':239 'collect':209 'consol':43 'console.log':130,158 'const':109 'cours':167 'cpu':230 'desir':33 'document.body.appendchild':135 'document.body.removechild':138 'document.createelement':111 'document.getelementsbytagname':60 'done':76,225 'download':83,199,226 'eat':229 'even':17 'everyth':203 'fetch':90 'fine':177 'first':39,205 'fun':13 'function':54,86,102 'get':150,202 'go':24 'imag':85 'img':51,58,61,106,108,116,155,157,159 'img.attributes':66 'img.parentelement.parentelement.classname':63 'img.split':123 'imgs.add':65 'internet':243 'javascript':238 'keep':208 'let':50,57,105,117,120,154 'like':147 'link':152 'make':9 'may':78,143 'mean':218 'miss':186 'much':7 'name':118,127,131,134 'never':235 'new':52 'notic':183 'oh':233 'open':30 'page':36,185,210 'pageimagedisplay':64 'pg':126 'quit':172 're':22,75 'refresh':213,221 'respons':93 'response.blob':95 'return':89,94,98 'run':79,145,236,247 'script':40,81,206 'scroll':46,193 'set':53 'setinterv':69 'simpli':144 'site':215 'someon':240 'someth':146 'spent':3 'src':67 'start':37 'stuff':162 'tell':18,244 'test':180 'thing':121 'thing.split':128 'thing.startswith':125 'till':211 'time':8 'todataurl':87,115 'tweak':165 'url':88,91 'url.createobjecturl':99 'use':26 'valu':68 'work':173,176"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemm.ee/comment/3445560"
+editedAt: null
+createdAt: DateTimeImmutable @1694484327 {#3961
date: 2023-09-12 04:05:27.0 +02:00
}
}
+root: App\Entity\EntryComment {#4016}
+body: """
Well, I may be technologically semi-literate and I may have felt a bit dizzy when I saw actual code in your comment, but I sure as hell will find a way to put it to use, no matter the cost.\n
\n
You’re terrific, man. No idea what else to say.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1694533159 {#4039
date: 2023-09-12 17:39:19.0 +02:00
}
+ip: null
+tags: null
+mentions: [
"@antonim@lemmy.dbzer0.com"
"@bela@lemm.ee"
]
+children: Doctrine\ORM\PersistentCollection {#4044 …}
+nested: Doctrine\ORM\PersistentCollection {#4043 …}
+votes: Doctrine\ORM\PersistentCollection {#4052 …}
+reports: Doctrine\ORM\PersistentCollection {#4033 …}
+favourites: Doctrine\ORM\PersistentCollection {#4031 …}
+notifications: Doctrine\ORM\PersistentCollection {#4029 …}
-id: 351202
-bodyTs: "'actual':20 'bit':15 'code':21 'comment':24 'cost':42 'dizzi':16 'els':50 'felt':13 'find':31 'hell':29 'idea':48 'liter':8 'man':46 'matter':40 'may':3,11 'put':35 're':44 'saw':19 'say':52 'semi':7 'semi-liter':6 'sure':27 'technolog':5 'terrif':45 'use':38 'way':33 'well':1"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.dbzer0.com/comment/2975607"
+editedAt: null
+createdAt: DateTimeImmutable @1694533159 {#4036
date: 2023-09-12 17:39:19.0 +02:00
}
} |
|
Show voter details
|
12 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4034
+user: Proxies\__CG__\App\Entity\User {#1960 …}
+entry: App\Entity\Entry {#1831
+user: Proxies\__CG__\App\Entity\User {#1960 …}
+magazine: App\Entity\Magazine {#260
+icon: null
+name: "datahoarder@lemmy.ml"
+title: "datahoarder"
+description: """
**Who are we?**\n
\n
We are digital librarians. Among us are represented the various reasons to keep data – legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they’re sure it’s done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.\n
\n
We are one. We are legion. And we’re trying really hard not to forget.\n
\n
– 5-4-3-2-1-bang from [this thread](https://web.archive.org/web/20221111153119/https://old.reddit.com/r/DataHoarder/comments/41tqt4/hi_guys_can_i_kindly_ask_for_an_eli5_of_this/cz53pi0/)
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 58
+entryCommentCount: 311
+postCount: 1
+postCommentCount: 1
+isAdult: false
+customCss: null
+lastActive: DateTime @1729502222 {#268
date: 2024-10-21 11:17:02.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#247 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#243 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#232 …}
+entries: Doctrine\ORM\PersistentCollection {#190 …}
+posts: Doctrine\ORM\PersistentCollection {#148 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#210 …}
+bans: Doctrine\ORM\PersistentCollection {#127 …}
+reports: Doctrine\ORM\PersistentCollection {#113 …}
+badges: Doctrine\ORM\PersistentCollection {#91 …}
+logs: Doctrine\ORM\PersistentCollection {#81 …}
+awards: Doctrine\ORM\PersistentCollection {#70 …}
+categories: Doctrine\ORM\PersistentCollection {#59 …}
-id: 32
+apId: "datahoarder@lemmy.ml"
+apProfileId: "https://lemmy.ml/c/datahoarder"
+apPublicUrl: "https://lemmy.ml/c/datahoarder"
+apFollowersUrl: "https://lemmy.ml/c/datahoarder/followers"
+apInboxUrl: "https://lemmy.ml/inbox"
+apDomain: "lemmy.ml"
+apPreferredUsername: "datahoarder"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1729303437 {#266
date: 2024-10-19 04:03:57.0 +02:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1695727192 {#274
date: 2023-09-26 13:19:52.0 +02:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1876 …}
+slug: "Google-Books-colour-images"
+title: "Google Books - colour images"
+url: null
+body: """
Google Books allows viewing the scans in colour, but when I click the option to download the PDF, I am provided only with a black-and-white version.\n
\n
Is it known how to obtain the original colour images, outside of *inspectelementing* each page one by one?
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 2
+favouriteCount: 0
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1729502222 {#1808
date: 2024-10-21 11:17:02.0 +02:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1916 …}
+votes: Doctrine\ORM\PersistentCollection {#1974 …}
+reports: Doctrine\ORM\PersistentCollection {#1837 …}
+favourites: Doctrine\ORM\PersistentCollection {#1936 …}
+notifications: Doctrine\ORM\PersistentCollection {#2429 …}
+badges: Doctrine\ORM\PersistentCollection {#2424 …}
+children: []
-id: 2367
-titleTs: "'book':2 'colour':3 'googl':1 'imag':4"
-bodyTs: "'allow':3 'black':26 'black-and-whit':25 'book':2 'click':12 'colour':8,38 'download':16 'googl':1 'imag':39 'inspectel':42 'known':32 'obtain':35 'one':45,47 'option':14 'origin':37 'outsid':40 'page':44 'pdf':18 'provid':21 'scan':6 'version':29 'view':4 'white':28"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1694413852
+visibility: "visible "
+apId: "https://lemmy.dbzer0.com/post/3811915"
+editedAt: null
+createdAt: DateTimeImmutable @1694400852 {#2406
date: 2023-09-11 04:54:12.0 +02:00
}
}
+magazine: App\Entity\Magazine {#260}
+image: null
+parent: App\Entity\EntryComment {#4016
+user: App\Entity\User {#3964 …}
+entry: App\Entity\Entry {#1831}
+magazine: App\Entity\Magazine {#260}
+image: null
+parent: null
+root: null
+body: """
I just spent a bit too much time making this (it was fun), so don’t even tell me if you’re not going to use it.\n
\n
You can open up a desired book’s page, start this first script in the console, and then scroll through the book:\n
\n
```\n
let imgs = new Set();\n
\n
function cheese() {\n
    for (let img of document.getElementsByTagName("img")) {\n
        if (img.parentElement.parentElement.className == "pageImageDisplay") imgs.add(img.attributes["src"].value);\n
    }\n
}\n
\n
setInterval(cheese, 5);\n
```\n
\n
And once you’re done you may run this script to download each image:\n
\n
```\n
function toDataURL(url) {\n
    return fetch(url).then((response) => {\n
        return response.blob();\n
    }).then(blob => {\n
        return URL.createObjectURL(blob);\n
    });\n
}\n
\n
async function asd() {\n
    for (let img of imgs) {\n
        const a = document.createElement("a");\n
        a.href = await toDataURL(img);\n
        let name;\n
        for (let thing of img.split("&")) {\n
            if (thing.startsWith("pg=")) {\n
                name = thing.split("=")[1];\n
                console.log(name);\n
                break;\n
            }\n
        }\n
        a.download = name;\n
        document.body.appendChild(a);\n
        a.click();\n
        document.body.removeChild(a);\n
    }\n
}\n
\n
asd();\n
```\n
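A note on the helper above: despite its name, `toDataURL` returns an *object URL* (a short-lived `blob:` reference to in-memory data), not a data URL. A minimal sketch of that behaviour, assuming a modern runtime with the `Blob` and `URL` globals:\n
\n
```javascript
// Sketch of what URL.createObjectURL does with a blob: it mints a
// short-lived "blob:" URL pointing at in-memory data.
const blob = new Blob(["hello"], { type: "text/plain" });
const objectUrl = URL.createObjectURL(blob);

console.log(objectUrl.startsWith("blob:")); // → true

// Revoke the URL once it has served its purpose, otherwise the blob
// stays pinned in memory for the lifetime of the page.
URL.revokeObjectURL(objectUrl);
```
\n
The download loop could likewise call `URL.revokeObjectURL(a.href)` after `a.click()` so a long book does not hold every page image in memory at once.\n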
\n
Alternatively you may simply run something like this to get the links:\n
\n
```\n
for (let img of imgs) {\n
    console.log(img);\n
}\n
```\n
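The `pg=` extraction in the download script can also be done with the standard `URL` API instead of splitting the src on `&` by hand; a hypothetical helper (the name `pageNameFromSrc` and the fallback value are not from the original scripts):\n
\n
```javascript
// Hypothetical helper: pull the "pg" query parameter out of a
// Google Books image src using the URL API.
function pageNameFromSrc(src) {
    // searchParams.get returns the parameter value, or null if absent.
    const pg = new URL(src).searchParams.get("pg");
    return pg ?? "unnamed";
}

// Example src of the shape the first script collects:
console.log(pageNameFromSrc("https://books.google.com/books/content?id=abc&pg=PA12&w=1280"));
// → PA12
```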
\n
There’s stuff you can tweak, of course, if it doesn’t quite work for you. It worked fine in my tests.\n
\n
If you notice a page missing, you should be able to just scroll back to it and then download again to get everything. The first script keeps collecting pages until you refresh the site, which also means you should refresh once you are done downloading, as it eats CPU for breakfast.\n
\n
Oh and ***NEVER RUN ANY JAVASCRIPT CODE SOMEONE ON THE INTERNET TELLS YOU TO RUN***
"""
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1729502222 {#4025
date: 2024-10-21 11:17:02.0 +02:00
}
+ip: null
+tags: [
"323232"
]
+mentions: [
"@antonim@lemmy.dbzer0.com"
]
+children: Doctrine\ORM\PersistentCollection {#4014 …}
+nested: Doctrine\ORM\PersistentCollection {#4012 …}
+votes: Doctrine\ORM\PersistentCollection {#4010 …}
+reports: Doctrine\ORM\PersistentCollection {#4008 …}
+favourites: Doctrine\ORM\PersistentCollection {#3976 …}
+notifications: Doctrine\ORM\PersistentCollection {#3980 …}
-id: 30017
-bodyTs: "'1':129 '5':71 'a.click':137 'a.download':133 'a.href':113 'abl':190 'also':217 'altern':141 'asd':103,140 'async':101 'await':114 'back':194 'bit':5 'blob':97,100 'book':34,49 'break':132 'breakfast':232 'chees':55,70 'code':239 'collect':209 'consol':43 'console.log':130,158 'const':109 'cours':167 'cpu':230 'desir':33 'document.body.appendchild':135 'document.body.removechild':138 'document.createelement':111 'document.getelementsbytagname':60 'done':76,225 'download':83,199,226 'eat':229 'even':17 'everyth':203 'fetch':90 'fine':177 'first':39,205 'fun':13 'function':54,86,102 'get':150,202 'go':24 'imag':85 'img':51,58,61,106,108,116,155,157,159 'img.attributes':66 'img.parentelement.parentelement.classname':63 'img.split':123 'imgs.add':65 'internet':243 'javascript':238 'keep':208 'let':50,57,105,117,120,154 'like':147 'link':152 'make':9 'may':78,143 'mean':218 'miss':186 'much':7 'name':118,127,131,134 'never':235 'new':52 'notic':183 'oh':233 'open':30 'page':36,185,210 'pageimagedisplay':64 'pg':126 'quit':172 're':22,75 'refresh':213,221 'respons':93 'response.blob':95 'return':89,94,98 'run':79,145,236,247 'script':40,81,206 'scroll':46,193 'set':53 'setinterv':69 'simpli':144 'site':215 'someon':240 'someth':146 'spent':3 'src':67 'start':37 'stuff':162 'tell':18,244 'test':180 'thing':121 'thing.split':128 'thing.startswith':125 'till':211 'time':8 'todataurl':87,115 'tweak':165 'url':88,91 'url.createobjecturl':99 'use':26 'valu':68 'work':173,176"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemm.ee/comment/3445560"
+editedAt: null
+createdAt: DateTimeImmutable @1694484327 {#3961
date: 2023-09-12 04:05:27.0 +02:00
}
}
+root: App\Entity\EntryComment {#4016}
+body: """
Well, I may be technologically semi-literate and I may have felt a bit dizzy when I saw actual code in your comment, but I sure as hell will find a way to put it to use, no matter the cost.\n
\n
You’re terrific, man. No idea what else to say.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1694533159 {#4039
date: 2023-09-12 17:39:19.0 +02:00
}
+ip: null
+tags: null
+mentions: [
"@antonim@lemmy.dbzer0.com"
"@bela@lemm.ee"
]
+children: Doctrine\ORM\PersistentCollection {#4044 …}
+nested: Doctrine\ORM\PersistentCollection {#4043 …}
+votes: Doctrine\ORM\PersistentCollection {#4052 …}
+reports: Doctrine\ORM\PersistentCollection {#4033 …}
+favourites: Doctrine\ORM\PersistentCollection {#4031 …}
+notifications: Doctrine\ORM\PersistentCollection {#4029 …}
-id: 351202
-bodyTs: "'actual':20 'bit':15 'code':21 'comment':24 'cost':42 'dizzi':16 'els':50 'felt':13 'find':31 'hell':29 'idea':48 'liter':8 'man':46 'matter':40 'may':3,11 'put':35 're':44 'saw':19 'say':52 'semi':7 'semi-liter':6 'sure':27 'technolog':5 'terrif':45 'use':38 'way':33 'well':1"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.dbzer0.com/comment/2975607"
+editedAt: null
+createdAt: DateTimeImmutable @1694533159 {#4036
date: 2023-09-12 17:39:19.0 +02:00
}
} |
|
Show voter details
|
13 |
DENIED
|
edit
|
App\Entity\Magazine {#260
+icon: null
+name: "datahoarder@lemmy.ml"
+title: "datahoarder"
+description: """
**Who are we?**\n
\n
We are digital librarians. Among us are represented the various reasons to keep data – legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they’re sure it’s done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.\n
\n
We are one. We are legion. And we’re trying really hard not to forget.\n
\n
– 5-4-3-2-1-bang from [this thread](https://web.archive.org/web/20221111153119/https://old.reddit.com/r/DataHoarder/comments/41tqt4/hi_guys_can_i_kindly_ask_for_an_eli5_of_this/cz53pi0/)
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 58
+entryCommentCount: 311
+postCount: 1
+postCommentCount: 1
+isAdult: false
+customCss: null
+lastActive: DateTime @1729502222 {#268
date: 2024-10-21 11:17:02.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#247 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#243 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#232 …}
+entries: Doctrine\ORM\PersistentCollection {#190 …}
+posts: Doctrine\ORM\PersistentCollection {#148 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#210 …}
+bans: Doctrine\ORM\PersistentCollection {#127 …}
+reports: Doctrine\ORM\PersistentCollection {#113 …}
+badges: Doctrine\ORM\PersistentCollection {#91 …}
+logs: Doctrine\ORM\PersistentCollection {#81 …}
+awards: Doctrine\ORM\PersistentCollection {#70 …}
+categories: Doctrine\ORM\PersistentCollection {#59 …}
-id: 32
+apId: "datahoarder@lemmy.ml"
+apProfileId: "https://lemmy.ml/c/datahoarder"
+apPublicUrl: "https://lemmy.ml/c/datahoarder"
+apFollowersUrl: "https://lemmy.ml/c/datahoarder/followers"
+apInboxUrl: "https://lemmy.ml/inbox"
+apDomain: "lemmy.ml"
+apPreferredUsername: "datahoarder"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1729303437 {#266
date: 2024-10-19 04:03:57.0 +02:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1695727192 {#274
date: 2023-09-26 13:19:52.0 +02:00
}
} |
|
Show voter details
|