mirror of https://github.com/MarginaliaSearch/MarginaliaSearch.git synced 2025-10-06 07:32:38 +02:00

Compare commits


148 Commits

Author SHA1 Message Date
Viktor Lofgren
3d179cddce (crawler) Correctly consume entity in sitemap retrieval 2025-04-18 00:32:21 +02:00
Viktor Lofgren
1a2aae496a (crawler) Correct handling and aborting of HttpClient requests
The initial implementation of the Apache HttpClient WarcInputBuffer had a resource leak, failing to free up resources.

Using HttpGet objects instead of the Classic...Request objects, as the latter fail to expose an abort() method.
2025-04-18 00:16:26 +02:00
Viktor Lofgren
353cdffb3f (crawler) Increase connection request timeout, restore congestion timeout 2025-04-17 21:32:06 +02:00
Viktor Lofgren
2e3f1313c7 (crawler) Log exceptions while crawling in crawler audit log 2025-04-17 21:18:09 +02:00
Viktor Lofgren
58e6f141ce (crawler) Reduce congestion throttle go-rate 2025-04-17 20:36:58 +02:00
Viktor Lofgren
500f63e921 (crawler) Lower max conn per route 2025-04-17 18:36:16 +02:00
Viktor Lofgren
6dfbedda1e (crawler) Increase max conn per route and connection timeout 2025-04-17 18:31:46 +02:00
Viktor Lofgren
9715ddb105 (crawler) Increase max pool size to a large value 2025-04-17 18:22:58 +02:00
Viktor Lofgren
1fc6313a77 (crawler) Remove log noise when retrying a bad URL 2025-04-17 17:10:46 +02:00
Viktor Lofgren
b1249d5b8a (crawler) Fix broken test. 2025-04-17 17:01:42 +02:00
Viktor
ef95d59b07 Merge pull request #161 from MarginaliaSearch/apache-httpclient-in-crawler
The previously used Java HttpClient seems unsuitable for crawler usage, which led to issues like send() operations sometimes hanging forever, with clunky workarounds such as running each send operation in a separate Future that can be cancelled on a timeout.

The most damning flaw is that it does not offer socket timeouts. If a server responds in a timely manner but then, whether from high load or malice, stops sending data, Java's built-in HttpClient will hang forever.

It simply has too many assumptions that break, and fails to expose the inner workings of the connection pool to a degree that permits satisfactory configuration, such as setting a SO_LINGER value or limiting the number of concurrent connections to a host.

Apache's HttpClient solves all these problems.

The change also includes a new battery of tests for the HttpFetcher, and refactors the retriever class a bit, moving logic into the HttpFetcher for a better separation of concerns.

The crawler will also be a bit more clever when fetching documents, attempting to use range queries where supported to limit the number of bytes fetched, as interrupting connections is undesirable and leads to connection storms and bufferbloat.
2025-04-17 16:57:19 +02:00
Viktor Lofgren
acdd8664f5 (crawler) More logging for the crawler, in a separate file. 2025-04-17 16:55:50 +02:00
Viktor Lofgren
6b12eac58a (crawler) Fix crawler retriever test to use the slop format 2025-04-17 16:35:13 +02:00
Viktor Lofgren
bb3f1f395a (crawler) Fix bug where headers were not stored correctly
This was the result of refactoring to Apache HttpClient.
2025-04-17 16:34:41 +02:00
Viktor Lofgren
b661beef41 (crawler) Amend recrawl logic to match redirects as being unchanged if their Location is the same. 2025-04-17 16:34:05 +02:00
Viktor Lofgren
9888c47f19 (crawler) Add custom Keep-Alive settings for HttpClient with max keep-alive of 30s 2025-04-17 15:25:46 +02:00
Viktor Lofgren
dcef7e955b (crawler) Try to avoid unnecessary connection resets
In order to keep connections alive, the crawler will consume data past its max size (but hope and pray the server supports range queries) as long as we've not exceeded the timeout.

This permits us to keep the connection alive in more scenarios, which is helpful for the health of the network stack, as constant TCP handshakes can lead to quite a lot of buffer bloat.

This will increase the bandwidth requirements in some scenarios, but on the other hand, it will increase the available bandwidth as well.
2025-04-17 14:51:33 +02:00
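The drain-instead-of-reset behavior described above can be sketched in plain JDK terms. This is an illustrative reduction, not the actual crawler code; the class name, buffer size, and deadline handling are all assumptions:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: after hitting the document size limit, keep reading (and
// discarding) the rest of the body until EOF or a deadline, so the
// connection can be reused instead of reset.
public class BodyDrainer {
    // Returns true if the stream was fully drained before the deadline,
    // meaning the connection is safe to keep alive.
    static boolean drainUntil(InputStream body, long deadlineNanos) {
        byte[] discard = new byte[8192];
        try {
            while (System.nanoTime() < deadlineNanos) {
                if (body.read(discard) < 0) {
                    return true; // reached EOF in time; connection reusable
                }
            }
        } catch (IOException e) {
            return false; // read error; connection not reusable
        }
        return false; // deadline hit; caller should close the connection
    }

    public static void main(String[] args) {
        InputStream leftover = new ByteArrayInputStream(new byte[100_000]);
        long deadline = System.nanoTime() + 1_000_000_000L; // 1 second budget
        System.out.println("drained in time: " + drainUntil(leftover, deadline));
    }
}
```

A real caller would pass the remainder of the HTTP response body and pick the deadline from the overall request timeout.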
Viktor Lofgren
b3973a1dd7 (crawler) Remove unnecessary crawl delay when not ct-probing
The crawler would *always* incur the crawl delay penalty associated with content type probing, even when it wasn't actually probing.  This removes the delay when not probing.
2025-04-17 14:39:04 +02:00
Viktor Lofgren
8bd05d6d90 (crawler) Attempt to use range queries where available
This might help in some circumstances to avoid fetching more data than we are interested in.
2025-04-17 14:37:55 +02:00
Viktor Lofgren
59df8e356e (crawler) Do not fail domain and content type probe on 405
Some endpoints do not support the HEAD method.  This has historically broken the crawler when it attempts to use HEAD to probe certain URLs that are suspected of being e.g. binary.

The change makes it so that we bypass the probing on 405 instead, and for the domain probe logic, we switch to a small range queried GET.
2025-04-17 13:54:28 +02:00
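The fallback can be sketched with java.net.http request builders for illustration (the crawler's real implementation differs, and the 1 KiB range and method names here are assumptions):

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch: if a HEAD probe comes back 405 Method Not Allowed, retry as a
// GET with a small Range header so only the first bytes are fetched.
public class ProbeFallback {
    static HttpRequest headProbe(URI url) {
        return HttpRequest.newBuilder(url)
                .method("HEAD", HttpRequest.BodyPublishers.noBody())
                .build();
    }

    static HttpRequest rangeGetProbe(URI url) {
        return HttpRequest.newBuilder(url)
                .header("Range", "bytes=0-1023") // only the first KiB
                .GET()
                .build();
    }

    // 405 means the server rejects HEAD, not that the URL is bad.
    static boolean shouldFallBackToGet(int statusCode) {
        return statusCode == 405;
    }

    public static void main(String[] args) {
        URI url = URI.create("https://example.com/file.bin");
        System.out.println(headProbe(url).method());
        System.out.println(rangeGetProbe(url).headers().firstValue("Range").orElse(""));
    }
}
```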
Viktor Lofgren
7161162a35 (crawler) Write WARC records in a sane order 2025-04-17 13:36:39 +02:00
Viktor Lofgren
d7c4c5141f (crawler) Migrate to Apache HttpClient for crawler
The previously used Java HttpClient seems unsuitable for crawler usage,
which led to issues like send() operations sometimes hanging forever,
with clunky workarounds such as running each send operation in a separate
Future that can be cancelled on a timeout.

It has too many assumptions that break, and fails to adequately expose
the inner workings of the connection pool to a degree that makes it possible
to configure in a satisfactory manner.

Apache's HttpClient solves all these problems.

The change also includes a new battery of tests for the HttpFetcher,
and refactors the retriever class a bit to move stuff into the HttpFetcher,
leading to a better separation of concerns.
2025-04-17 12:51:08 +02:00
Viktor Lofgren
88e9b8fb05 (crawler) Throttle the establishment of new connections
To avoid network congestion from the packet storm created when establishing hundreds or thousands of connections at the same time, pace the opening of new connections.
2025-04-08 22:53:02 +02:00
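The pacing can be sketched as a minimal reservation scheme. Names and the interval value are hypothetical, and the crawler's actual mechanism may differ:

```java
// Sketch: enforce a minimum interval between new connection attempts so
// hundreds of handshakes don't fire at once.
public class ConnectionPacer {
    private final long minIntervalNanos;
    private long nextAllowedNanos = 0;

    ConnectionPacer(long minIntervalNanos) {
        this.minIntervalNanos = minIntervalNanos;
    }

    // Returns how long the caller must wait before opening the next
    // connection, and reserves that slot. Pure arithmetic so it is easy
    // to test; a real caller would sleep for the returned duration.
    synchronized long reserveDelayNanos(long nowNanos) {
        long start = Math.max(nowNanos, nextAllowedNanos);
        nextAllowedNanos = start + minIntervalNanos;
        return start - nowNanos;
    }

    public static void main(String[] args) {
        ConnectionPacer pacer = new ConnectionPacer(5_000_000); // 5 ms apart
        System.out.println(pacer.reserveDelayNanos(0)); // first connection: no wait
        System.out.println(pacer.reserveDelayNanos(0)); // second at the same instant: must wait
    }
}
```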
Viktor Lofgren
b6265cee11 (feeds) Add timeout code to send()
Due to the unique way Java's HttpClient implements timeouts, we must always wrap it in an executor to catch the scenario where a server stops sending data mid-response, which would otherwise hang the send method forever.
2025-04-08 22:09:59 +02:00
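The wrapping described above can be sketched in plain JDK terms. This is an illustrative reduction, not the actual feeds code; the class name and sample tasks are hypothetical:

```java
import java.util.Optional;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch: run the blocking send() in a separate task so a stalled
// response can be abandoned via a timeout instead of hanging forever.
public class SendTimeoutWrapper {
    private static final ExecutorService executor = Executors.newCachedThreadPool(r -> {
        Thread t = new Thread(r);
        t.setDaemon(true); // don't keep the JVM alive for worker threads
        return t;
    });

    // Runs the supplied blocking call, giving up after timeoutMillis.
    static <T> Optional<T> callWithTimeout(Callable<T> call, long timeoutMillis) {
        Future<T> future = executor.submit(call);
        try {
            return Optional.of(future.get(timeoutMillis, TimeUnit.MILLISECONDS));
        } catch (TimeoutException e) {
            future.cancel(true); // interrupt the hung call
            return Optional.empty();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // A fast call completes normally...
        System.out.println(callWithTimeout(() -> "response", 1000).orElse("timed out"));
        // ...while a stalled one is cancelled instead of hanging.
        System.out.println(callWithTimeout(() -> { Thread.sleep(60_000); return "never"; }, 50).orElse("timed out"));
    }
}
```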
Viktor Lofgren
c91af247e9 (rate-limit) Fix rate limiting logic
The rate limiter was misconfigured to regenerate tokens at a fixed rate of 1 per refillRate, rather than refillRate per minute.  Additionally, the default bucket size is increased to 4x the refill rate.
2025-04-05 12:26:26 +02:00
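As a sketch of the corrected behavior (hypothetical names, not the actual rate limiter class), a token bucket that refills refillRate tokens per minute, with a capacity of 4x that rate, looks like:

```java
// Illustrative token bucket: tokens regenerate at refillRatePerMinute per
// minute (the fix), not 1 token per refillRate time units (the bug).
public class TokenBucket {
    private final int refillRatePerMinute;
    private final int capacity;           // 4x the refill rate, per the commit
    private double tokens;
    private long lastRefillMillis;

    TokenBucket(int refillRatePerMinute, long nowMillis) {
        this.refillRatePerMinute = refillRatePerMinute;
        this.capacity = 4 * refillRatePerMinute;
        this.tokens = capacity;
        this.lastRefillMillis = nowMillis;
    }

    // Refill proportionally to elapsed time; clamp at capacity.
    void refill(long nowMillis) {
        double elapsedMinutes = (nowMillis - lastRefillMillis) / 60_000.0;
        tokens = Math.min(capacity, tokens + elapsedMinutes * refillRatePerMinute);
        lastRefillMillis = nowMillis;
    }

    boolean tryConsume(long nowMillis) {
        refill(nowMillis);
        if (tokens >= 1) { tokens -= 1; return true; }
        return false;
    }

    double availableTokens() { return tokens; }

    public static void main(String[] args) {
        TokenBucket bucket = new TokenBucket(10, 0);
        System.out.println("tokens at start: " + bucket.availableTokens());
        bucket.tryConsume(0);
        System.out.println("tokens after one request: " + bucket.availableTokens());
    }
}
```

The clock is passed in explicitly so the arithmetic is testable without sleeping.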
Viktor Lofgren
7a31227de1 (crawler) Filter out robots.txt-sitemaps that belong to different domains 2025-04-02 13:35:37 +02:00
Viktor Lofgren
4f477604c5 (crawler) Improve error handling in parquet->slop conversion
Parquet code throws a RuntimeException, which was not correctly caught, leading to a failure to crawl.
2025-04-02 13:16:01 +02:00
Viktor Lofgren
2970f4395b (minor) Test code cleanup 2025-04-02 13:16:01 +02:00
Viktor Lofgren
d1ec909b36 (crawler) Improve handling of timeouts to prevent crawler from getting stuck 2025-04-02 12:57:21 +02:00
Viktor Lofgren
c67c5bbf42 (crawler) Experimentally drop to HTTP 1.1 for crawler to see if this solves stuck send()s 2025-04-01 12:05:21 +02:00
Viktor Lofgren
ecb0e57a1a (crawler) Make the use of virtual threads in the crawler configurable via system properties 2025-03-27 21:26:05 +01:00
Viktor Lofgren
8c61f61b46 (crawler) Add crawling metadata to domainstate db 2025-03-27 16:38:37 +01:00
Viktor Lofgren
662a18c933 Revert "(crawler) Further rearrange crawl order"
This reverts commit 1c2426a052.

The change does not appear necessary to avoid problems.
2025-03-27 11:25:08 +01:00
Viktor Lofgren
1c2426a052 (crawler) Further rearrange crawl order
Limit crawl order preference to edu domains, to avoid hitting stuff like medium and wordpress with shotgun requests.
2025-03-27 11:19:20 +01:00
Viktor Lofgren
34df7441ac (crawler) Add some jitter to crawl delay to avoid accidentally synchronized requests 2025-03-27 11:15:16 +01:00
Viktor Lofgren
5387e2bd80 (crawler) Adjust crawl order to get a better mixture of domains 2025-03-27 11:12:48 +01:00
Viktor Lofgren
0f3b24d0f8 (crawler) Evaluate virtual threads for the crawler
The change also alters SimpleBlockingThreadPool to add the option to use virtual threads instead of platform threads.
2025-03-27 11:02:21 +01:00
Viktor Lofgren
a732095d2a (crawler) Improve crawl task ordering
Further improve the ordering of the crawl tasks in order to ensure that potentially blocking tasks are enqueued as soon as possible.
2025-03-26 16:51:37 +01:00
Viktor Lofgren
6607f0112f (crawler) Improve how the crawler deals with interruptions
In some cases, threads would previously fail to terminate when interrupted.
2025-03-26 16:19:57 +01:00
Viktor Lofgren
4913730de9 (jdk) Upgrade to Java 24 2025-03-26 13:26:06 +01:00
Viktor Lofgren
1db64f9d56 (chore) Fix zookeeper test by upgrading zk image version.
Test suddenly broke due to the increasing entropy of the universe.
2025-03-26 11:47:14 +01:00
Viktor Lofgren
4dcff14498 (search) Improve contrast with light mode 2025-03-25 13:15:31 +01:00
Viktor Lofgren
426658f64e (search) Improve contrast with light mode 2025-03-25 11:54:54 +01:00
Viktor Lofgren
2181b22f05 (crawler) Change default maxConcurrentRequests to 512
This seems like a more sensible default after testing a bit.  May need local tuning.
2025-03-22 12:11:09 +01:00
Viktor Lofgren
42bd79a609 (crawler) Experimentally throttle the number of active retrievals to see how this affects the network performance
There's been some indications that request storms lead to buffer bloat and bad throughput.

This adds a configurable semaphore, by default permitting 100 active requests.
2025-03-22 11:50:37 +01:00
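A configurable semaphore of this sort can be sketched as follows; the class and method names are illustrative, not the crawler's actual code:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Sketch: a semaphore capping the number of simultaneously active
// retrievals (100 by default, per the commit message).
public class RetrievalThrottle {
    private final Semaphore permits;

    RetrievalThrottle(int maxActiveRequests) {
        permits = new Semaphore(maxActiveRequests);
    }

    // Blocks until a slot is free, runs the retrieval, then frees the slot.
    <T> T retrieve(Supplier<T> task) {
        permits.acquireUninterruptibly();
        try {
            return task.get();
        } finally {
            permits.release();
        }
    }

    int availableSlots() { return permits.availablePermits(); }

    public static void main(String[] args) {
        RetrievalThrottle throttle = new RetrievalThrottle(100);
        String body = throttle.retrieve(() -> "fetched document");
        System.out.println(body + "; free slots: " + throttle.availableSlots());
    }
}
```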
Viktor Lofgren
b91c1e528a (favicon) Send dummy svg result when image is missing
This prevents the browser from rendering a "broken image" in this scenario.
2025-03-21 15:15:14 +01:00
Viktor Lofgren
b1130d7a04 (domainstatedb) Allow creation of disconnected db
This is required for executor services that do not have crawl data to still be able to initialize.
2025-03-21 14:59:36 +01:00
Viktor Lofgren
8364bcdc97 (favicon) Add favicons to the matchograms 2025-03-21 14:30:40 +01:00
Viktor Lofgren
626cab5fab (favicon) Add favicon to site overview 2025-03-21 14:15:23 +01:00
Viktor Lofgren
cfd4712191 (favicon) Add capability for fetching favicons 2025-03-21 13:38:58 +01:00
Viktor Lofgren
9f18ced73d (crawler) Improve deferred task behavior 2025-03-18 12:54:18 +01:00
Viktor Lofgren
18e91269ab (crawler) Improve deferred task behavior 2025-03-18 12:25:22 +01:00
Viktor Lofgren
e315ca5758 (search) Change icon for small web filter
The previous icon was of an irregular size and shifted the layout in an unaesthetic way.
2025-03-17 12:07:34 +01:00
Viktor Lofgren
3ceea17c1d (search) Adjustments to device detection in CSS
Use the pointer:fine media query to better distinguish between mobile devices and PCs with a window in portrait orientation.

With this, we never show mobile filtering functionality on mobile; and never show the touch-inaccessible minimized sidebar on mobile.
2025-03-17 12:04:34 +01:00
Viktor Lofgren
b34527c1a3 (search) Add small web filter for new UI 2025-03-17 11:39:19 +01:00
Viktor Lofgren
185bf28fca (crawler) Correct issue leading to parquet files not being correctly preconverted
Path.endsWith("str") != String.endsWith(".str")
2025-03-10 13:48:12 +01:00
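The gotcha in the one-line summary above can be demonstrated directly; the paths and the .parquet suffix here are illustrative:

```java
import java.nio.file.Path;

// Path.endsWith compares whole path elements, while String.endsWith
// compares raw characters, so a file-extension check must go through
// toString().
public class PathSuffixGotcha {
    static boolean isParquetWrong(Path p) {
        return p.endsWith(".parquet");            // matches a path *element* named ".parquet"
    }

    static boolean isParquetRight(Path p) {
        return p.toString().endsWith(".parquet"); // matches the filename suffix
    }

    public static void main(String[] args) {
        Path p = Path.of("crawl-data", "domain.parquet");
        System.out.println("Path.endsWith:   " + isParquetWrong(p));  // false
        System.out.println("String.endsWith: " + isParquetRight(p));  // true
    }
}
```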
Viktor Lofgren
78cc25584a (crawler) Add error logging when entering bad path for historical crawl data 2025-03-10 13:38:40 +01:00
Viktor Lofgren
62ba30bacf (common) Log info about metrics server 2025-03-10 13:12:39 +01:00
Viktor Lofgren
3bb84eb206 (common) Log info about metrics server 2025-03-10 13:03:48 +01:00
Viktor Lofgren
be7d13ccce (crawler) Correct task execution logic in crawler
The old behavior would flag domains as pending too soon, leading to them being omitted from execution if they were not immediately available to run.
2025-03-09 13:47:51 +01:00
Viktor Lofgren
8c088a7c0b (crawler) Remove custom thread factory
This was causing issues, and not really doing much of benefit.
2025-03-09 11:50:52 +01:00
Viktor Lofgren
ea9a642b9b (crawler) More effective task scheduling in the crawler
This should hopefully allow more threads to be busy
2025-03-09 11:44:59 +01:00
Viktor Lofgren
27f528af6a (search) Fix "Remove Javascript" toggle
A bug was introduced at some point where the special keyword for filtering on javascript was changed to special:scripts, from js:true/js:false.

Solves issue #155
2025-02-28 12:03:04 +01:00
Viktor Lofgren
20ca41ec95 (processed model) Use String columns instead of Txt columns for SlopDocumentRecord
It's very likely TxtStringColumn is the culprit of the bug seen in https://github.com/MarginaliaSearch/MarginaliaSearch/issues/154 where the wrong URL was shown for a search result.
2025-02-24 11:41:51 +01:00
Viktor Lofgren
7671f0d9e4 (search) Display message when no search results are found 2025-02-24 11:15:55 +01:00
Viktor Lofgren
44d6bc71b7 (assistant) Migrate to Jooby framework 2025-02-15 13:28:12 +01:00
Viktor Lofgren
9d302e2973 (assistant) Migrate to Jooby framework 2025-02-15 13:26:04 +01:00
Viktor Lofgren
f553701224 (assistant) Migrate to Jooby framework 2025-02-15 13:21:48 +01:00
Viktor Lofgren
f076d05595 (deps) Upgrade slf4j to latest 2025-02-15 12:50:16 +01:00
Viktor Lofgren
b513809710 (*) Stopgap fix for metrics server initialization errors bringing down services 2025-02-14 17:09:48 +01:00
Viktor Lofgren
7519b28e21 (search) Correct exception from misbehaving bots feeding invalid urls 2025-02-14 17:05:24 +01:00
Viktor Lofgren
3eac4dd57f (search) Correct exception in error handler when page is missing 2025-02-14 17:00:21 +01:00
Viktor Lofgren
4c2810720a (search) Add redirect handler for full URLs in the /site endpoint 2025-02-14 16:31:11 +01:00
Viktor Lofgren
8480ba8daa (live-capture) Code cleanup 2025-02-04 14:05:36 +01:00
Viktor Lofgren
fbba392491 (live-capture) Send a UA-string from the browserless fetcher as well
The change also introduces a somewhat convoluted wiremock test to intercept and verify that these headers are in fact sent
2025-02-04 13:36:49 +01:00
Viktor Lofgren
530eb35949 (update-rss) Do not fail the feed fetcher control actor if it takes a long time to complete. 2025-02-03 11:35:32 +01:00
Viktor Lofgren
c2dd2175a2 (search) Add new query expansion rule contracting WORD NUM pairs into WORD-NUM and WORDNUM 2025-02-01 13:13:30 +01:00
Viktor Lofgren
b8581b0f56 (crawler) Safe sanitization of headers during warc->slop conversion
The warc->slop converter was rejecting some items because they had headers that were representable in the Warc code's MessageHeader map implementation, but illegal in the HttpHeaders' implementation.

Fixing this by manually filtering these out.  Ostensibly the constructor has a filtering predicate, but this annoyingly runs too late and fails to prevent the problem.
2025-01-31 12:47:42 +01:00
Viktor Lofgren
2ea34767d8 (crawler) Use the response URL when resolving relative links
The crawler was incorrectly using the request URL as the base URL when resolving relative links.  This caused problems when encountering redirects.

For example, if we fetch /log, which redirects to /log/, and find links to foo/ and bar/, these would resolve to /foo and /bar rather than /log/foo and /log/bar.
2025-01-31 12:40:13 +01:00
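The example above can be reproduced with java.net.URI (the URLs are illustrative):

```java
import java.net.URI;

// Relative links must be resolved against the *response* URL (after
// redirects), not the request URL.
public class BaseUrlResolution {
    public static void main(String[] args) {
        URI requestUrl  = URI.create("https://example.com/log");   // what we asked for
        URI responseUrl = URI.create("https://example.com/log/");  // where we ended up

        // Resolving against the request URL loses the /log/ prefix...
        System.out.println(requestUrl.resolve("foo/"));   // https://example.com/foo/
        // ...while the response URL yields the correct absolute link.
        System.out.println(responseUrl.resolve("foo/"));  // https://example.com/log/foo/
    }
}
```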
Viktor Lofgren
e9af838231 (actor) Fix migration actor final steps 2025-01-30 11:48:21 +01:00
Viktor Lofgren
ae0cad47c4 (actor) Utility method for getting a json prototype for actor states
If we can hook this into the control gui somehow, it'll make for a nice QOL upgrade when manually interacting with the actors.
2025-01-29 15:20:25 +01:00
Viktor Lofgren
5fbc8ef998 (misc) Tidying 2025-01-29 15:17:04 +01:00
Viktor Lofgren
32c6dd9e6a (actor) Delete old data in the migration actor 2025-01-29 14:51:46 +01:00
Viktor Lofgren
6ece6a6cfb (actor) Improve resilience for the migration actor 2025-01-29 14:43:09 +01:00
Viktor Lofgren
39cd1c18f8 Automatically run npm install tailwindcss@3 via setup.sh, as the new default version of the package is incompatible with the project 2025-01-29 12:21:08 +01:00
Viktor
eb65daaa88 Merge pull request #151 from Lionstiger/master
fix small grammar error in footerLegal.jte
2025-01-28 21:49:50 +01:00
Viktor
0bebdb6e33 Merge branch 'master' into master 2025-01-28 21:49:36 +01:00
Viktor Lofgren
1e50e392c6 (actor) Improve logging and error handling for data migration actor 2025-01-28 15:34:36 +01:00
Viktor Lofgren
fb673de370 (crawler) Change the header 'User-agent' to 'User-Agent' 2025-01-28 15:34:16 +01:00
Viktor Lofgren
eee73ab16c (crawler) Be more lenient when performing a domain probe 2025-01-28 15:24:30 +01:00
Viktor Lofgren
5354e034bf (search) Minor grammar fix 2025-01-27 18:36:31 +01:00
Magnus Wulf
72384ad6ca fix small grammar error 2025-01-27 15:04:57 +01:00
Viktor Lofgren
a2b076f9be (converter) Add progress tracking for big domains in converter 2025-01-26 18:03:59 +01:00
Viktor Lofgren
c8b0a32c0f (crawler) Reduce long retention of CrawlDataReference objects and their associated SerializableCrawlDataStreams 2025-01-26 15:40:17 +01:00
Viktor Lofgren
f0d74aa3bb (converter) Fix close() ordering to prevent converter crash 2025-01-26 14:47:36 +01:00
Viktor Lofgren
74a1f100f4 (converter) Refactor to remove CrawledDomainReader and move its functionality into SerializableCrawlDataStream 2025-01-26 14:46:50 +01:00
Viktor Lofgren
eb049658e4 (converter) Add truncation at the parser step to prevent the converter from spending too much time on excessively large documents
Refactor to do this without introducing additional copies
2025-01-26 14:28:53 +01:00
Viktor Lofgren
db138b2a6f (converter) Add truncation at the parser step to prevent the converter from spending too much time on excessively large documents 2025-01-26 14:25:57 +01:00
Viktor Lofgren
1673fc284c (converter) Reduce lock contention in converter by separating the processing of full and simple-track domains 2025-01-26 13:21:46 +01:00
Viktor Lofgren
503ea57d5b (converter) Reduce lock contention in converter by separating the processing of full and simple-track domains 2025-01-26 13:18:14 +01:00
Viktor Lofgren
18ca926c7f (converter) Truncate excessively long strings in SentenceExtractor, malformed data was effectively DOS:ing the converter 2025-01-26 12:52:54 +01:00
Viktor Lofgren
db99242db2 (converter) Adding some logging around the simple processing track to investigate an issue with the converter stalling 2025-01-26 12:02:00 +01:00
Viktor Lofgren
2b9d2985ba (doc) Update readme with up-to-date install instructions. 2025-01-24 18:51:41 +01:00
Viktor Lofgren
eeb6ecd711 (search) Make it clearer that the affiliate marker applies to the result, and not the search engine's relation to the result. 2025-01-24 18:50:00 +01:00
Viktor Lofgren
1f58aeadbf (build) Upgrade JIB 2025-01-24 18:49:28 +01:00
Viktor Lofgren
3d68be64da (crawler) Add default CT when it's missing for icons 2025-01-22 13:55:47 +01:00
Viktor Lofgren
668f3b16ef (search) Redirect ^/site/$ to /site 2025-01-22 13:35:18 +01:00
Viktor Lofgren
98a340a0d1 (crawler) Add favicon data to domain state db in its own table 2025-01-22 11:41:20 +01:00
Viktor Lofgren
8862100f7e (crawler) Improve logging and error handling 2025-01-21 21:44:21 +01:00
Viktor Lofgren
274941f6de (crawler) Smarter parquet->slop crawl data migration 2025-01-21 21:26:12 +01:00
Viktor Lofgren
abec83582d Fix refactoring gore 2025-01-21 15:08:04 +01:00
Viktor Lofgren
569520c9b6 (index) Add manual adjustments for rankings based on domain 2025-01-21 15:07:43 +01:00
Viktor Lofgren
088310e998 (converter) Improve simple processing performance
There was a regression introduced in the recent slop migration changes in the performance of the simple conversion track.  This resolves the issue.
2025-01-21 14:13:33 +01:00
Viktor
270cab874b Merge pull request #134 from MarginaliaSearch/slop-crawl-data-spike
Store crawl data in slop instead of parquet
2025-01-21 13:34:22 +01:00
Viktor Lofgren
4c74e280d3 (crawler) Fix urlencoding in sitemap fetcher 2025-01-21 13:33:35 +01:00
Viktor Lofgren
5b347e17ac (crawler) Automatically migrate to slop from parquet when crawling 2025-01-21 13:33:14 +01:00
Viktor Lofgren
55d6ab933f Merge branch 'master' into slop-crawl-data-spike 2025-01-21 13:32:58 +01:00
Viktor Lofgren
43b74e9706 (crawler) Fix exception handler and resource leak in WarcRecorder 2025-01-20 23:45:28 +01:00
Viktor Lofgren
579a115243 (crawler) Reduce log spam from error handling in new sitemap fetcher 2025-01-20 23:17:13 +01:00
Viktor
2c67f50a43 Merge pull request #150 from MarginaliaSearch/httpclient-in-crawler
Reduce the use of 3rd party code in the crawler
2025-01-20 19:35:30 +01:00
Viktor Lofgren
78a958e2b0 (crawler) Fix broken test that started failing after the search engine moved to a new domain 2025-01-20 18:52:14 +01:00
Viktor Lofgren
4e939389b2 (crawler) New Jsoup based sitemap parser 2025-01-20 14:37:44 +01:00
Viktor Lofgren
e67a9bdb91 (crawler) Migrate away from using OkHttp in the crawler, use Java's HttpClient instead. 2025-01-19 15:07:11 +01:00
Viktor Lofgren
567e4e1237 (crawler) Fast detection and bail-out for crawler traps
Improve logging and exclude robots.txt from this logic.
2025-01-18 15:28:54 +01:00
Viktor Lofgren
4342e42722 (crawler) Fast detection and bail-out for crawler traps
Nepenthes has been doing the rounds on social media; this adds an easy detection and mitigation mechanism for this type of trap, as sadly not all webmasters set up their robots.txt correctly.  Out-of-the-box crawl limits will also deal with this type of attack, but this fix is faster.
2025-01-17 13:02:57 +01:00
Viktor Lofgren
bc818056e6 (run) Fix templates for mariadb
Apparently the docker image contract changed at some point, and now we should spawn mariadbd and not mysqld; mariadb-admin and not mysqladmin.
2025-01-16 15:27:02 +01:00
Viktor Lofgren
de2feac238 (chore) Upgrade jib from 3.4.3 to 3.4.4 2025-01-16 15:10:45 +01:00
Viktor Lofgren
1e770205a5 (search) Dyslexia fix 2025-01-12 20:40:14 +01:00
Viktor
e44ecd6d69 Merge pull request #149 from MarginaliaSearch/vlofgren-patch-1
Update ROADMAP.md
2025-01-12 20:38:36 +01:00
Viktor
5b93a0e633 Update ROADMAP.md 2025-01-12 20:38:11 +01:00
Viktor
08fb0e5efe Update ROADMAP.md 2025-01-12 20:37:43 +01:00
Viktor
bcf67782ea Update ROADMAP.md 2025-01-12 20:37:09 +01:00
Viktor Lofgren
ef3f175ede (search) Don't clobber the search query URL with default values 2025-01-10 15:57:30 +01:00
Viktor Lofgren
bbe4b5d9fd Revert experimental changes 2025-01-10 15:52:02 +01:00
Viktor Lofgren
c67a635103 (search, experimental) Add a few debugging tracks to the search UI 2025-01-10 15:44:44 +01:00
Viktor Lofgren
20b24133fb (search, experimental) Add a few debugging tracks to the search UI 2025-01-10 15:34:48 +01:00
Viktor Lofgren
f2567677e8 (index-client) Clean up index client code
Improve error handling.  This should be a relatively rare case, but we don't want one bad index partition to blow up the entire query.
2025-01-10 15:17:07 +01:00
Viktor Lofgren
bc2c2061f2 (index-client) Clean up index client code
This should have the rpc stream reception be performed in parallel in separate threads, rather than blocking sequentially in the main thread, hopefully giving a slight performance boost.
2025-01-10 15:14:42 +01:00
Viktor Lofgren
1c7f5a31a5 (search) Further reduce the number of db queries by adding more caching to DbDomainQueries. 2025-01-10 14:17:29 +01:00
Viktor Lofgren
59a8ea60f7 (search) Further reduce the number of db queries by adding more caching to DbDomainQueries. 2025-01-10 14:15:22 +01:00
Viktor Lofgren
aa9b1244ea (search) Reduce the number of db queries a bit by caching data that doesn't change too often 2025-01-10 13:56:04 +01:00
Viktor Lofgren
2d17233366 (search) Reduce the number of db queries a bit by caching data that doesn't change too often 2025-01-10 13:53:56 +01:00
Viktor Lofgren
b245cc9f38 (search) Reduce the number of db queries a bit by caching data that doesn't change too often 2025-01-10 13:46:19 +01:00
Viktor Lofgren
6614d05bdf (db) Make db pool size configurable 2025-01-09 20:20:51 +01:00
Viktor Lofgren
47e58a21c6 Refactor documentBody method and ContentType charset handling
Updated the `documentBody` method to improve parsing retries and error handling. Refactored `ContentType` charset processing with cleaner logic, removing redundant handling for unsupported charsets. Also, updated the version of the `slop` library in dependency settings.
2024-12-17 17:11:37 +01:00
Viktor Lofgren
3714104976 Add loader for slop data in converter.
Also alter CrawledDocument to not require String parsing of the underlying byte[] data.  This should reduce the number of large memory allocations quite significantly, hopefully reducing the GC churn a bit.
2024-12-17 15:40:24 +01:00
Viktor Lofgren
f6f036b9b1 Switch to new Slop format for crawl data storage and processing.
Replaces Parquet output and processing with the new Slop-based format. Includes data migration functionality, updates to handling and writing of crawl data, and introduces support for SLOP in domain readers and converters.
2024-12-15 19:34:03 +01:00
Viktor Lofgren
b510b7feb8 Spike for storing crawl data in slop instead of parquet
This seems to reduce RAM overhead to 100s of MB (from ~2 GB), as well as roughly double the read speeds.  On disk size is virtually identical.
2024-12-15 15:49:47 +01:00
179 changed files with 5158 additions and 2013 deletions

View File

@@ -1,4 +1,4 @@
-# Roadmap 2024-2025
+# Roadmap 2025

 This is a roadmap with major features planned for Marginalia Search.
@@ -30,12 +30,6 @@ Retaining the ability to independently crawl the web is still strongly desirable
 The search engine has a bit of a problem showing spicy content mixed in with the results. It would be desirable to have a way to filter this out. It's likely something like a URL blacklist (e.g. [UT1](https://dsi.ut-capitole.fr/blacklists/index_en.php) )
 combined with naive bayesian filter would go a long way, or something more sophisticated...?
-## Web Design Overhaul
-The design is kinda clunky and hard to maintain, and needlessly outdated-looking.
-In progress: PR [#127](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/127) -- demo available at https://test.marginalia.nu/
 ## Additional Language Support
 It would be desirable if the search engine supported more languages than English. This is partially about
@@ -62,8 +56,31 @@ filter for any API consumer.
 I've talked to the stract dev and he does not think it's a good idea to mimic their optics language, which is quite ad-hoc, but instead to work together to find some new common description language for this.
+## Show favicons next to search results
+This is expected from search engines. Basic proof of concept sketch of fetching this data has been done, but the feature is some way from being reality.
+## Specialized crawler for github
+One of the search engine's biggest limitations right now is that it does not index github at all. A specialized crawler that fetches at least the readme.md would go a long way toward providing search capabilities in this domain.
 # Completed
+## Web Design Overhaul (COMPLETED 2025-01)
+The design is kinda clunky and hard to maintain, and needlessly outdated-looking.
+PR [#127](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/127)
+## Finalize RSS support (COMPLETED 2024-11)
+Marginalia has experimental RSS preview support for a few domains. This works well and
+it should be extended to all domains. It would also be interesting to offer search of the
+RSS data itself, or use the RSS set to feed a special live index that updates faster than the
+main dataset.
+Completed with PR [#122](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/122) and PR [#125](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/125)
 ## Proper Position Index (COMPLETED 2024-09)
 The search engine uses a fixed width bit mask to indicate word positions. It has the benefit
@@ -76,11 +93,3 @@ list, as is the civilized way of doing this.
 Completed with PR [#99](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/99)
-## Finalize RSS support (COMPLETED 2024-11)
-Marginalia has experimental RSS preview support for a few domains. This works well and
-it should be extended to all domains. It would also be interesting to offer search of the
-RSS data itself, or use the RSS set to feed a special live index that updates faster than the
-main dataset.
-Completed with PR [#122](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/122) and PR [#125](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/125)

View File

@@ -5,7 +5,7 @@ plugins {
     // This is a workaround for a bug in the Jib plugin that causes it to stall randomly
     // https://github.com/GoogleContainerTools/jib/issues/3347
-    id 'com.google.cloud.tools.jib' version '3.4.3' apply(false)
+    id 'com.google.cloud.tools.jib' version '3.4.4' apply(false)
 }

 group 'marginalia'
@@ -43,12 +43,11 @@ subprojects.forEach {it ->
 }

 ext {
-    jvmVersion=23
-    dockerImageBase='container-registry.oracle.com/graalvm/jdk:23'
+    jvmVersion = 24
+    dockerImageBase='container-registry.oracle.com/graalvm/jdk:24'
     dockerImageTag='latest'
     dockerImageRegistry='marginalia'
-    jibVersion = '3.4.3'
+    jibVersion = '3.4.4'
 }

 idea {

View File

@@ -24,58 +24,4 @@ public class LanguageModels {
         this.fasttextLanguageModel = fasttextLanguageModel;
         this.segments = segments;
     }
-
-    public static LanguageModelsBuilder builder() {
-        return new LanguageModelsBuilder();
-    }
-
-    public static class LanguageModelsBuilder {
-        private Path termFrequencies;
-        private Path openNLPSentenceDetectionData;
-        private Path posRules;
-        private Path posDict;
-        private Path fasttextLanguageModel;
-        private Path segments;
-
-        LanguageModelsBuilder() {
-        }
-
-        public LanguageModelsBuilder termFrequencies(Path termFrequencies) {
-            this.termFrequencies = termFrequencies;
-            return this;
-        }
-
-        public LanguageModelsBuilder openNLPSentenceDetectionData(Path openNLPSentenceDetectionData) {
-            this.openNLPSentenceDetectionData = openNLPSentenceDetectionData;
-            return this;
-        }
-
-        public LanguageModelsBuilder posRules(Path posRules) {
-            this.posRules = posRules;
-            return this;
-        }
-
-        public LanguageModelsBuilder posDict(Path posDict) {
-            this.posDict = posDict;
-            return this;
-        }
-
-        public LanguageModelsBuilder fasttextLanguageModel(Path fasttextLanguageModel) {
-            this.fasttextLanguageModel = fasttextLanguageModel;
-            return this;
-        }
-
-        public LanguageModelsBuilder segments(Path segments) {
-            this.segments = segments;
-            return this;
-        }
-
-        public LanguageModels build() {
-            return new LanguageModels(this.termFrequencies, this.openNLPSentenceDetectionData, this.posRules, this.posDict, this.fasttextLanguageModel, this.segments);
-        }
-
-        public String toString() {
-            return "LanguageModels.LanguageModelsBuilder(termFrequencies=" + this.termFrequencies + ", openNLPSentenceDetectionData=" + this.openNLPSentenceDetectionData + ", posRules=" + this.posRules + ", posDict=" + this.posDict + ", fasttextLanguageModel=" + this.fasttextLanguageModel + ", segments=" + this.segments + ")";
-        }
-    }
 }

View File

@@ -20,7 +20,11 @@ public class DbDomainQueries {
private final HikariDataSource dataSource; private final HikariDataSource dataSource;
private static final Logger logger = LoggerFactory.getLogger(DbDomainQueries.class); private static final Logger logger = LoggerFactory.getLogger(DbDomainQueries.class);
private final Cache<EdgeDomain, Integer> domainIdCache = CacheBuilder.newBuilder().maximumSize(10_000).build(); private final Cache<EdgeDomain, Integer> domainIdCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
private final Cache<EdgeDomain, DomainIdWithNode> domainWithNodeCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
private final Cache<Integer, EdgeDomain> domainNameCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
private final Cache<String, List<DomainWithNode>> siblingsCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
@Inject @Inject
public DbDomainQueries(HikariDataSource dataSource) public DbDomainQueries(HikariDataSource dataSource)
@@ -30,16 +34,21 @@ public class DbDomainQueries {
     public Integer getDomainId(EdgeDomain domain) throws NoSuchElementException {
-        try (var connection = dataSource.getConnection()) {
+        try {
             return domainIdCache.get(domain, () -> {
-                try (var stmt = connection.prepareStatement("SELECT ID FROM EC_DOMAIN WHERE DOMAIN_NAME=?")) {
+                try (var connection = dataSource.getConnection();
+                     var stmt = connection.prepareStatement("SELECT ID FROM EC_DOMAIN WHERE DOMAIN_NAME=?")) {
                     stmt.setString(1, domain.toString());
                     var rsp = stmt.executeQuery();
                     if (rsp.next()) {
                         return rsp.getInt(1);
                     }
                 }
+                catch (SQLException ex) {
+                    throw new RuntimeException(ex);
+                }
                 throw new NoSuchElementException();
             });
         }
@@ -49,8 +58,33 @@ public class DbDomainQueries {
         catch (ExecutionException ex) {
             throw new RuntimeException(ex.getCause());
         }
-        catch (SQLException ex) {
-            throw new RuntimeException(ex);
-        }
     }
+
+    public DomainIdWithNode getDomainIdWithNode(EdgeDomain domain) throws NoSuchElementException {
+        try {
+            return domainWithNodeCache.get(domain, () -> {
+                try (var connection = dataSource.getConnection();
+                     var stmt = connection.prepareStatement("SELECT ID, NODE_AFFINITY FROM EC_DOMAIN WHERE DOMAIN_NAME=?")) {
+                    stmt.setString(1, domain.toString());
+                    var rsp = stmt.executeQuery();
+                    if (rsp.next()) {
+                        return new DomainIdWithNode(rsp.getInt(1), rsp.getInt(2));
+                    }
+                }
+                catch (SQLException ex) {
+                    throw new RuntimeException(ex);
+                }
+                throw new NoSuchElementException();
+            });
+        }
+        catch (UncheckedExecutionException ex) {
+            throw new NoSuchElementException();
+        }
+        catch (ExecutionException ex) {
+            throw new RuntimeException(ex.getCause());
+        }
+    }
@@ -84,47 +118,55 @@ public class DbDomainQueries {
     }
 
     public Optional<EdgeDomain> getDomain(int id) {
+        EdgeDomain existing = domainNameCache.getIfPresent(id);
+        if (existing != null) {
+            return Optional.of(existing);
+        }
+
         try (var connection = dataSource.getConnection()) {
             try (var stmt = connection.prepareStatement("SELECT DOMAIN_NAME FROM EC_DOMAIN WHERE ID=?")) {
                 stmt.setInt(1, id);
                 var rsp = stmt.executeQuery();
                 if (rsp.next()) {
-                    return Optional.of(new EdgeDomain(rsp.getString(1)));
+                    var val = new EdgeDomain(rsp.getString(1));
+                    domainNameCache.put(id, val);
+                    return Optional.of(val);
                 }
                 return Optional.empty();
             }
         }
-        catch (UncheckedExecutionException ex) {
-            throw new RuntimeException(ex.getCause());
-        }
         catch (SQLException ex) {
             throw new RuntimeException(ex);
         }
     }
-    public List<DomainWithNode> otherSubdomains(EdgeDomain domain, int cnt) {
-        List<DomainWithNode> ret = new ArrayList<>();
-
-        try (var conn = dataSource.getConnection();
-             var stmt = conn.prepareStatement("SELECT DOMAIN_NAME, NODE_AFFINITY FROM EC_DOMAIN WHERE DOMAIN_TOP = ? LIMIT ?")) {
-            stmt.setString(1, domain.topDomain);
-            stmt.setInt(2, cnt);
-
-            var rs = stmt.executeQuery();
-            while (rs.next()) {
-                var sibling = new EdgeDomain(rs.getString(1));
-
-                if (sibling.equals(domain))
-                    continue;
-
-                ret.add(new DomainWithNode(sibling, rs.getInt(2)));
-            }
-        } catch (SQLException e) {
-            logger.error("Failed to get domain neighbors");
-        }
-
-        return ret;
+    public List<DomainWithNode> otherSubdomains(EdgeDomain domain, int cnt) throws ExecutionException {
+        String topDomain = domain.topDomain;
+
+        return siblingsCache.get(topDomain, () -> {
+            List<DomainWithNode> ret = new ArrayList<>();
+
+            try (var conn = dataSource.getConnection();
+                 var stmt = conn.prepareStatement("SELECT DOMAIN_NAME, NODE_AFFINITY FROM EC_DOMAIN WHERE DOMAIN_TOP = ? LIMIT ?")) {
+                stmt.setString(1, topDomain);
+                stmt.setInt(2, cnt);
+
+                var rs = stmt.executeQuery();
+                while (rs.next()) {
+                    var sibling = new EdgeDomain(rs.getString(1));
+
+                    if (sibling.equals(domain))
+                        continue;
+
+                    ret.add(new DomainWithNode(sibling, rs.getInt(2)));
+                }
+            } catch (SQLException e) {
+                logger.error("Failed to get domain neighbors");
+            }
+
+            return ret;
+        });
     }
public record DomainWithNode (EdgeDomain domain, int nodeAffinity) { public record DomainWithNode (EdgeDomain domain, int nodeAffinity) {
@@ -132,4 +174,6 @@ public class DbDomainQueries {
return nodeAffinity > 0; return nodeAffinity > 0;
} }
} }
public record DomainIdWithNode (int domainId, int nodeAffinity) { }
} }
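The DbDomainQueries change above swaps per-call SQL lookups for Guava caches whose `get(key, loader)` runs the loader only on a cache miss. A minimal stdlib sketch of that read-through pattern, using `ConcurrentHashMap.computeIfAbsent` in place of Guava's `Cache` (which additionally bounds size via `maximumSize` and wraps loader exceptions); class and variable names here are illustrative, not from the PR:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Read-through cache sketch: the loader runs only on a miss; repeat
// lookups for the same key are served from memory.
class ReadThroughCache<K, V> {
    private final Map<K, V> map = new ConcurrentHashMap<>();

    V get(K key, Function<K, V> loader) {
        return map.computeIfAbsent(key, loader);
    }
}

public class CacheDemo {
    public static void main(String[] args) {
        AtomicInteger loads = new AtomicInteger();
        ReadThroughCache<String, Integer> cache = new ReadThroughCache<>();

        // Stand-in for the SQL lookup; counts how often it actually runs
        Function<String, Integer> loader =
                domain -> { loads.incrementAndGet(); return domain.length(); };

        cache.get("marginalia.nu", loader);
        cache.get("marginalia.nu", loader); // cache hit, loader not invoked
        System.out.println(loads.get());    // 1
    }
}
```

Note how the PR also moves `dataSource.getConnection()` inside the loader, so a connection is only borrowed from the pool when the cache actually misses.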

View File

@@ -1,118 +0,0 @@
package nu.marginalia.db;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.OptionalInt;
/** Class used in exporting data. This is intended to be used for a brief time
* and then discarded, not kept around as a service.
*/
public class DbDomainStatsExportMultitool implements AutoCloseable {
private final Connection connection;
private final int nodeId;
private final PreparedStatement knownUrlsQuery;
private final PreparedStatement visitedUrlsQuery;
private final PreparedStatement goodUrlsQuery;
private final PreparedStatement domainNameToId;
private final PreparedStatement allDomainsQuery;
private final PreparedStatement crawlQueueDomains;
private final PreparedStatement indexedDomainsQuery;
public DbDomainStatsExportMultitool(HikariDataSource dataSource, int nodeId) throws SQLException {
this.connection = dataSource.getConnection();
this.nodeId = nodeId;
knownUrlsQuery = connection.prepareStatement("""
SELECT KNOWN_URLS
FROM EC_DOMAIN INNER JOIN DOMAIN_METADATA
ON EC_DOMAIN.ID=DOMAIN_METADATA.ID
WHERE DOMAIN_NAME=?
""");
visitedUrlsQuery = connection.prepareStatement("""
SELECT VISITED_URLS
FROM EC_DOMAIN INNER JOIN DOMAIN_METADATA
ON EC_DOMAIN.ID=DOMAIN_METADATA.ID
WHERE DOMAIN_NAME=?
""");
goodUrlsQuery = connection.prepareStatement("""
SELECT GOOD_URLS
FROM EC_DOMAIN INNER JOIN DOMAIN_METADATA
ON EC_DOMAIN.ID=DOMAIN_METADATA.ID
WHERE DOMAIN_NAME=?
""");
domainNameToId = connection.prepareStatement("""
SELECT ID
FROM EC_DOMAIN
WHERE DOMAIN_NAME=?
""");
allDomainsQuery = connection.prepareStatement("""
SELECT DOMAIN_NAME
FROM EC_DOMAIN
""");
crawlQueueDomains = connection.prepareStatement("""
SELECT DOMAIN_NAME
FROM CRAWL_QUEUE
""");
indexedDomainsQuery = connection.prepareStatement("""
SELECT DOMAIN_NAME
FROM EC_DOMAIN
WHERE INDEXED > 0
""");
}
public OptionalInt getVisitedUrls(String domainName) throws SQLException {
return executeNameToIntQuery(domainName, visitedUrlsQuery);
}
public OptionalInt getDomainId(String domainName) throws SQLException {
return executeNameToIntQuery(domainName, domainNameToId);
}
public List<String> getCrawlQueueDomains() throws SQLException {
return executeListQuery(crawlQueueDomains, 100);
}
public List<String> getAllIndexedDomains() throws SQLException {
return executeListQuery(indexedDomainsQuery, 100_000);
}
private OptionalInt executeNameToIntQuery(String domainName, PreparedStatement statement)
throws SQLException {
statement.setString(1, domainName);
var rs = statement.executeQuery();
if (rs.next()) {
return OptionalInt.of(rs.getInt(1));
}
return OptionalInt.empty();
}
private List<String> executeListQuery(PreparedStatement statement, int sizeHint) throws SQLException {
List<String> ret = new ArrayList<>(sizeHint);
var rs = statement.executeQuery();
while (rs.next()) {
ret.add(rs.getString(1));
}
return ret;
}
@Override
public void close() throws SQLException {
knownUrlsQuery.close();
goodUrlsQuery.close();
visitedUrlsQuery.close();
allDomainsQuery.close();
crawlQueueDomains.close();
domainNameToId.close();
connection.close();
}
}

View File

@@ -14,7 +14,7 @@ public class EdgeDomain implements Serializable {
@Nonnull @Nonnull
public final String topDomain; public final String topDomain;
-    public EdgeDomain(String host) {
+    public EdgeDomain(@Nonnull String host) {
Objects.requireNonNull(host, "domain name must not be null"); Objects.requireNonNull(host, "domain name must not be null");
host = host.toLowerCase(); host = host.toLowerCase();
@@ -61,6 +61,10 @@ public class EdgeDomain implements Serializable {
this.topDomain = topDomain; this.topDomain = topDomain;
} }
public static String getTopDomain(String host) {
return new EdgeDomain(host).topDomain;
}
private boolean looksLikeGovTld(String host) { private boolean looksLikeGovTld(String host) {
if (host.length() < 8) if (host.length() < 8)
return false; return false;
@@ -116,24 +120,6 @@ public class EdgeDomain implements Serializable {
return topDomain.substring(0, cutPoint).toLowerCase(); return topDomain.substring(0, cutPoint).toLowerCase();
} }
public String getLongDomainKey() {
StringBuilder ret = new StringBuilder();
int cutPoint = topDomain.indexOf('.');
if (cutPoint < 0) {
ret.append(topDomain);
} else {
ret.append(topDomain, 0, cutPoint);
}
if (!subDomain.isEmpty() && !"www".equals(subDomain)) {
ret.append(":");
ret.append(subDomain);
}
return ret.toString().toLowerCase();
}
/** If possible, try to provide an alias domain, /** If possible, try to provide an alias domain,
* i.e. a domain name that is very likely to link to this one * i.e. a domain name that is very likely to link to this one
* */ * */

View File

@@ -10,7 +10,9 @@ import java.nio.charset.StandardCharsets;
import java.nio.file.Files; import java.nio.file.Files;
import java.nio.file.Path; import java.nio.file.Path;
import java.time.LocalDateTime; import java.time.LocalDateTime;
-import java.util.*;
+import java.util.HashSet;
+import java.util.Optional;
+import java.util.Set;
import java.util.function.Function; import java.util.function.Function;
/** WorkLog is a journal of work done by a process, /** WorkLog is a journal of work done by a process,
@@ -61,6 +63,12 @@ public class WorkLog implements AutoCloseable, Closeable {
return new WorkLoadIterable<>(logFile, mapper); return new WorkLoadIterable<>(logFile, mapper);
} }
public static int countEntries(Path crawlerLog) throws IOException{
try (var linesStream = Files.lines(crawlerLog)) {
return (int) linesStream.filter(WorkLogEntry::isJobId).count();
}
}
// Use synchro over concurrent set to avoid competing writes // Use synchro over concurrent set to avoid competing writes
// - correct is better than fast here, it's sketchy enough to use // - correct is better than fast here, it's sketchy enough to use
// a PrintWriter // a PrintWriter
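The new `WorkLog.countEntries` above streams the crawler log and counts lines matching `WorkLogEntry::isJobId`, closing the stream via try-with-resources. A self-contained sketch of the same idea; the predicate here (skip blanks and `#` comments) is an assumption standing in for the real `isJobId` check:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CountEntriesDemo {
    // Count the lines of a work log that look like job entries.
    // try-with-resources matters: Files.lines holds the file open
    // until the stream is closed.
    static int countEntries(Path log) throws IOException {
        try (var lines = Files.lines(log)) {
            return (int) lines.filter(l -> !l.isBlank() && !l.startsWith("#")).count();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("worklog", ".log");
        Files.write(tmp, List.of("# header", "job-1 ok", "job-2 ok", ""));
        System.out.println(countEntries(tmp)); // 2
        Files.deleteIfExists(tmp);
    }
}
```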

View File

@@ -89,7 +89,7 @@ public class DatabaseModule extends AbstractModule {
config.addDataSourceProperty("prepStmtCacheSize", "250"); config.addDataSourceProperty("prepStmtCacheSize", "250");
config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048"); config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
-        config.setMaximumPoolSize(5);
+        config.setMaximumPoolSize(Integer.getInteger("db.poolSize", 5));
config.setMinimumIdle(2); config.setMinimumIdle(2);
config.setMaxLifetime(Duration.ofMinutes(9).toMillis()); config.setMaxLifetime(Duration.ofMinutes(9).toMillis());
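The pool-size change above uses `Integer.getInteger`, which is easy to misread: it parses a JVM *system property* (e.g. `-Ddb.poolSize=20`), not an environment variable, and returns the default when the property is unset or unparseable. A small demonstration (`db.poolSize` is the property name from the diff; the class name is illustrative):

```java
public class PoolSizeDemo {
    // Integer.getInteger("db.poolSize", 5) reads the system property
    // "db.poolSize" and falls back to 5 when it is absent or not a number.
    static int poolSize() {
        return Integer.getInteger("db.poolSize", 5);
    }

    public static void main(String[] args) {
        System.out.println(poolSize());          // 5 when the property is unset
        System.setProperty("db.poolSize", "20");
        System.out.println(poolSize());          // 20
    }
}
```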

View File

@@ -6,6 +6,7 @@ import nu.marginalia.service.ServiceId;
import org.slf4j.Logger; import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.InetAddress; import java.net.InetAddress;
import java.net.NetworkInterface; import java.net.NetworkInterface;
import java.util.Enumeration; import java.util.Enumeration;
@@ -115,11 +116,12 @@ public class ServiceConfigurationModule extends AbstractModule {
} }
} }
-    public static String getLocalNetworkIP() throws Exception {
+    public static String getLocalNetworkIP() throws IOException {
         Enumeration<NetworkInterface> nets = NetworkInterface.getNetworkInterfaces();
 
         while (nets.hasMoreElements()) {
             NetworkInterface netif = nets.nextElement();
+            logger.info("Considering network interface {}: Up? {}, Loopback? {}", netif.getDisplayName(), netif.isUp(), netif.isLoopback());
             if (!netif.isUp() || netif.isLoopback()) {
                 continue;
             }
@@ -127,6 +129,7 @@
             Enumeration<InetAddress> inetAddresses = netif.getInetAddresses();
             while (inetAddresses.hasMoreElements()) {
                 InetAddress addr = inetAddresses.nextElement();
+                logger.info("Considering address {}: SiteLocal? {}, Loopback? {}", addr.getHostAddress(), addr.isSiteLocalAddress(), addr.isLoopbackAddress());
                 if (addr.isSiteLocalAddress() && !addr.isLoopbackAddress()) {
                     return addr.getHostAddress();
                 }

View File

@@ -15,6 +15,7 @@ import org.slf4j.LoggerFactory;
import org.slf4j.Marker; import org.slf4j.Marker;
import org.slf4j.MarkerFactory; import org.slf4j.MarkerFactory;
import java.nio.file.Files;
import java.nio.file.Path; import java.nio.file.Path;
import java.nio.file.Paths; import java.nio.file.Paths;
import java.util.List; import java.util.List;
@@ -106,9 +107,12 @@ public class JoobyService {
                 config.externalAddress());
 
         // FIXME: This won't work outside of docker, may need to submit a PR to jooby to allow classpaths here
-        jooby.install(new JteModule(Path.of("/app/resources/jte"), Path.of("/app/classes/jte-precompiled")));
-        jooby.assets("/*", Paths.get("/app/resources/static"));
+        if (Files.exists(Path.of("/app/resources/jte")) || Files.exists(Path.of("/app/classes/jte-precompiled"))) {
+            jooby.install(new JteModule(Path.of("/app/resources/jte"), Path.of("/app/classes/jte-precompiled")));
+        }
+        if (Files.exists(Path.of("/app/resources/static"))) {
+            jooby.assets("/*", Paths.get("/app/resources/static"));
+        }
var options = new ServerOptions(); var options = new ServerOptions();
options.setHost(config.bindAddress()); options.setHost(config.bindAddress());
options.setPort(restEndpoint.port()); options.setPort(restEndpoint.port());

View File

@@ -6,25 +6,36 @@ import nu.marginalia.service.module.ServiceConfiguration;
 import org.eclipse.jetty.server.Server;
 import org.eclipse.jetty.servlet.ServletContextHandler;
 import org.eclipse.jetty.servlet.ServletHolder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.net.InetSocketAddress;
 
 public class MetricsServer {
 
+    private static final Logger logger = LoggerFactory.getLogger(MetricsServer.class);
+
     @Inject
-    public MetricsServer(ServiceConfiguration configuration) throws Exception {
+    public MetricsServer(ServiceConfiguration configuration) {
         // If less than zero, we forego setting up a metrics server
         if (configuration.metricsPort() < 0)
             return;
 
-        Server server = new Server(new InetSocketAddress(configuration.bindAddress(), configuration.metricsPort()));
-
-        ServletContextHandler context = new ServletContextHandler();
-        context.setContextPath("/");
-        server.setHandler(context);
-
-        context.addServlet(new ServletHolder(new MetricsServlet()), "/metrics");
-
-        server.start();
+        try {
+            Server server = new Server(new InetSocketAddress(configuration.bindAddress(), configuration.metricsPort()));
+
+            ServletContextHandler context = new ServletContextHandler();
+            context.setContextPath("/");
+            server.setHandler(context);
+
+            context.addServlet(new ServletHolder(new MetricsServlet()), "/metrics");
+
+            logger.info("MetricsServer listening on {}:{}", configuration.bindAddress(), configuration.metricsPort());
+
+            server.start();
+        }
+        catch (Exception|NoSuchMethodError ex) {
+            logger.error("Failed to set up metrics server", ex);
+        }
     }
 }

View File

@@ -35,21 +35,8 @@ public class RateLimiter {
     }
 
-    public static RateLimiter forExpensiveRequest() {
-        return new RateLimiter(5, 10);
-    }
-
     public static RateLimiter custom(int perMinute) {
-        return new RateLimiter(perMinute, 60);
+        return new RateLimiter(4 * perMinute, perMinute);
     }
 
-    public static RateLimiter forSpamBots() {
-        return new RateLimiter(120, 3600);
-    }
-
-    public static RateLimiter forLogin() {
-        return new RateLimiter(3, 15);
-    }
 
     private void cleanIdleBuckets() {
@@ -62,7 +49,7 @@
     }
 
     private Bucket createBucket() {
-        var refill = Refill.greedy(1, Duration.ofSeconds(refillRate));
+        var refill = Refill.greedy(refillRate, Duration.ofSeconds(60));
        var bw = Bandwidth.classic(capacity, refill);
        return Bucket.builder().addLimit(bw).build();
    }
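After this change, `custom(perMinute)` builds a token bucket with a burst capacity of four times the steady rate, refilled at `refillRate` tokens per 60 seconds. A plain-Java sketch of those semantics (Bucket4j's `Refill.greedy` spreads the refill evenly across the interval; this stand-in refills fractionally per millisecond, and all names are illustrative):

```java
// Token bucket: starts full at 4 * perMinute tokens, refills at
// perMinute tokens per minute. Bursts drain the bucket; sustained
// traffic is limited by the refill rate.
class TokenBucket {
    private final long capacity;
    private final double refillPerMilli;
    private double tokens;
    private long lastRefill;

    TokenBucket(int perMinute, long nowMillis) {
        this.capacity = 4L * perMinute;
        this.refillPerMilli = perMinute / 60_000.0;
        this.tokens = capacity;
        this.lastRefill = nowMillis;
    }

    boolean tryConsume(long nowMillis) {
        tokens = Math.min(capacity, tokens + (nowMillis - lastRefill) * refillPerMilli);
        lastRefill = nowMillis;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }
}

public class TokenBucketDemo {
    public static void main(String[] args) {
        TokenBucket bucket = new TokenBucket(60, 0); // 60/min => burst of 240
        int granted = 0;
        for (int i = 0; i < 300; i++) {
            if (bucket.tryConsume(0)) granted++;     // all at t=0: only the burst passes
        }
        System.out.println(granted); // 240
    }
}
```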

View File

@@ -5,6 +5,7 @@
<Filters> <Filters>
<MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" /> <MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" /> <MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="CRAWLER" onMatch="DENY" onMismatch="NEUTRAL" />
</Filters> </Filters>
</Console> </Console>
<RollingFile name="LogToFile" fileName="${env:WMSA_LOG_DIR:-/var/log/wmsa}/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}.log" filePattern="/var/log/wmsa/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}-log-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz" <RollingFile name="LogToFile" fileName="${env:WMSA_LOG_DIR:-/var/log/wmsa}/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}.log" filePattern="/var/log/wmsa/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}-log-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz"
@@ -13,9 +14,20 @@
<Filters> <Filters>
<MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" /> <MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" /> <MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="CRAWLER" onMatch="DENY" onMismatch="NEUTRAL" />
</Filters> </Filters>
<SizeBasedTriggeringPolicy size="10MB" /> <SizeBasedTriggeringPolicy size="10MB" />
</RollingFile> </RollingFile>
<RollingFile name="LogToFile" fileName="${env:WMSA_LOG_DIR:-/var/log/wmsa}/crawler-audit-${env:WMSA_SERVICE_NODE:-0}.log" filePattern="/var/log/wmsa/crawler-audit-${env:WMSA_SERVICE_NODE:-0}-log-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz"
ignoreExceptions="false">
<PatternLayout>
<Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS}: %msg{nolookups}%n</Pattern>
</PatternLayout>
<SizeBasedTriggeringPolicy size="100MB" />
<Filters>
<MarkerFilter marker="CRAWLER" onMatch="ALLOW" onMismatch="DENY" />
</Filters>
</RollingFile>
</Appenders> </Appenders>
<Loggers> <Loggers>
<Logger name="org.apache.zookeeper" level="WARN" /> <Logger name="org.apache.zookeeper" level="WARN" />

View File

@@ -5,6 +5,7 @@
<Filters> <Filters>
<MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" /> <MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" /> <MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="CRAWLER" onMatch="DENY" onMismatch="NEUTRAL" />
</Filters> </Filters>
</Console> </Console>
<RollingFile name="LogToFile" fileName="${env:WMSA_LOG_DIR:-/var/log/wmsa}/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}.log" filePattern="/var/log/wmsa/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}-log-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz" <RollingFile name="LogToFile" fileName="${env:WMSA_LOG_DIR:-/var/log/wmsa}/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}.log" filePattern="/var/log/wmsa/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}-log-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz"
@@ -17,6 +18,17 @@
<MarkerFilter marker="PROCESS" onMatch="DENY" onMismatch="NEUTRAL" /> <MarkerFilter marker="PROCESS" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" /> <MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" /> <MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="CRAWLER" onMatch="DENY" onMismatch="NEUTRAL" />
</Filters>
</RollingFile>
<RollingFile name="LogToFile" fileName="${env:WMSA_LOG_DIR:-/var/log/wmsa}/crawler-audit-${env:WMSA_SERVICE_NODE:-0}.log" filePattern="/var/log/wmsa/crawler-audit-${env:WMSA_SERVICE_NODE:-0}-log-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz"
ignoreExceptions="false">
<PatternLayout>
<Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS}: %msg{nolookups}%n</Pattern>
</PatternLayout>
<SizeBasedTriggeringPolicy size="100MB" />
<Filters>
<MarkerFilter marker="CRAWLER" onMatch="ALLOW" onMismatch="DENY" />
</Filters> </Filters>
</RollingFile> </RollingFile>
</Appenders> </Appenders>

View File

@@ -25,7 +25,7 @@ import static org.mockito.Mockito.when;
class ZkServiceRegistryTest { class ZkServiceRegistryTest {
private static final int ZOOKEEPER_PORT = 2181; private static final int ZOOKEEPER_PORT = 2181;
private static final GenericContainer<?> zookeeper = private static final GenericContainer<?> zookeeper =
-            new GenericContainer<>("zookeeper:3.8.0")
+            new GenericContainer<>("zookeeper:3.8")
.withExposedPorts(ZOOKEEPER_PORT); .withExposedPorts(ZOOKEEPER_PORT);
List<ZkServiceRegistry> registries = new ArrayList<>(); List<ZkServiceRegistry> registries = new ArrayList<>();

View File

@@ -20,6 +20,7 @@ public enum ExecutorActor {
EXPORT_FEEDS(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED), EXPORT_FEEDS(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
EXPORT_SAMPLE_DATA(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED), EXPORT_SAMPLE_DATA(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
DOWNLOAD_SAMPLE(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED), DOWNLOAD_SAMPLE(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
MIGRATE_CRAWL_DATA(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
PROC_CONVERTER_SPAWNER(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED, NodeProfile.SIDELOAD), PROC_CONVERTER_SPAWNER(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED, NodeProfile.SIDELOAD),
PROC_LOADER_SPAWNER(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED, NodeProfile.SIDELOAD), PROC_LOADER_SPAWNER(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED, NodeProfile.SIDELOAD),

View File

@@ -66,6 +66,7 @@ public class ExecutorActorControlService {
DownloadSampleActor downloadSampleActor, DownloadSampleActor downloadSampleActor,
ScrapeFeedsActor scrapeFeedsActor, ScrapeFeedsActor scrapeFeedsActor,
ExecutorActorStateMachines stateMachines, ExecutorActorStateMachines stateMachines,
MigrateCrawlDataActor migrateCrawlDataActor,
ExportAllPrecessionActor exportAllPrecessionActor, ExportAllPrecessionActor exportAllPrecessionActor,
UpdateRssActor updateRssActor) throws SQLException { UpdateRssActor updateRssActor) throws SQLException {
this.messageQueueFactory = messageQueueFactory; this.messageQueueFactory = messageQueueFactory;
@@ -107,6 +108,8 @@ public class ExecutorActorControlService {
register(ExecutorActor.SCRAPE_FEEDS, scrapeFeedsActor); register(ExecutorActor.SCRAPE_FEEDS, scrapeFeedsActor);
register(ExecutorActor.UPDATE_RSS, updateRssActor); register(ExecutorActor.UPDATE_RSS, updateRssActor);
register(ExecutorActor.MIGRATE_CRAWL_DATA, migrateCrawlDataActor);
if (serviceConfiguration.node() == 1) { if (serviceConfiguration.node() == 1) {
register(ExecutorActor.PREC_EXPORT_ALL, exportAllPrecessionActor); register(ExecutorActor.PREC_EXPORT_ALL, exportAllPrecessionActor);
} }

View File

@@ -14,6 +14,8 @@ import nu.marginalia.mq.persistence.MqPersistence;
import nu.marginalia.nodecfg.NodeConfigurationService; import nu.marginalia.nodecfg.NodeConfigurationService;
import nu.marginalia.nodecfg.model.NodeProfile; import nu.marginalia.nodecfg.model.NodeProfile;
import nu.marginalia.service.module.ServiceConfiguration; import nu.marginalia.service.module.ServiceConfiguration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.time.Duration; import java.time.Duration;
import java.time.LocalDateTime; import java.time.LocalDateTime;
@@ -29,6 +31,7 @@ public class UpdateRssActor extends RecordActorPrototype {
private final NodeConfigurationService nodeConfigurationService; private final NodeConfigurationService nodeConfigurationService;
private final MqPersistence persistence; private final MqPersistence persistence;
private static final Logger logger = LoggerFactory.getLogger(UpdateRssActor.class);
@Inject @Inject
public UpdateRssActor(Gson gson, public UpdateRssActor(Gson gson,
@@ -101,8 +104,8 @@ public class UpdateRssActor extends RecordActorPrototype {
             case UpdateRefresh(int count, long msgId) -> {
                 MqMessage msg = persistence.waitForMessageTerminalState(msgId, Duration.ofSeconds(10), Duration.ofHours(12));
                 if (msg == null) {
-                    // Retry the update
-                    yield new Error("Failed to update feeds: message not found");
+                    logger.warn("UpdateRefresh is taking a very long time");
+                    yield new UpdateRefresh(count, msgId);
                 } else if (msg.state() != MqMessageState.OK) {
                     // Retry the update
                     yield new Error("Failed to update feeds: " + msg.state());
@@ -119,8 +122,8 @@
             case UpdateClean(long msgId) -> {
                 MqMessage msg = persistence.waitForMessageTerminalState(msgId, Duration.ofSeconds(10), Duration.ofHours(12));
                 if (msg == null) {
-                    // Retry the update
-                    yield new Error("Failed to update feeds: message not found");
+                    logger.warn("UpdateClean is taking a very long time");
+                    yield new UpdateClean(msgId);
                 } else if (msg.state() != MqMessageState.OK) {
                     // Retry the update
                     yield new Error("Failed to update feeds: " + msg.state());

View File

@@ -0,0 +1,150 @@
package nu.marginalia.actor.task;
import com.google.gson.Gson;
import jakarta.inject.Inject;
import jakarta.inject.Singleton;
import nu.marginalia.actor.prototype.RecordActorPrototype;
import nu.marginalia.actor.state.ActorStep;
import nu.marginalia.io.CrawlerOutputFile;
import nu.marginalia.process.log.WorkLog;
import nu.marginalia.process.log.WorkLogEntry;
import nu.marginalia.service.control.ServiceHeartbeat;
import nu.marginalia.slop.SlopCrawlDataRecord;
import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorage;
import nu.marginalia.storage.model.FileStorageId;
import org.apache.logging.log4j.util.Strings;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;
@Singleton
public class MigrateCrawlDataActor extends RecordActorPrototype {
private final FileStorageService fileStorageService;
private final ServiceHeartbeat serviceHeartbeat;
private static final Logger logger = LoggerFactory.getLogger(MigrateCrawlDataActor.class);
@Inject
public MigrateCrawlDataActor(Gson gson, FileStorageService fileStorageService, ServiceHeartbeat serviceHeartbeat) {
super(gson);
this.fileStorageService = fileStorageService;
this.serviceHeartbeat = serviceHeartbeat;
}
public record Run(long fileStorageId) implements ActorStep {}
@Override
public ActorStep transition(ActorStep self) throws Exception {
return switch (self) {
case Run(long fileStorageId) -> {
FileStorage storage = fileStorageService.getStorage(FileStorageId.of(fileStorageId));
Path root = storage.asPath();
Path crawlerLog = root.resolve("crawler.log");
Path newCrawlerLog = Files.createTempFile(root, "crawler", ".migrate.log");
int totalEntries = WorkLog.countEntries(crawlerLog);
try (WorkLog workLog = new WorkLog(newCrawlerLog);
var heartbeat = serviceHeartbeat.createServiceAdHocTaskHeartbeat("Migrating")
) {
int entryIdx = 0;
for (Map.Entry<WorkLogEntry, Path> item : WorkLog.iterableMap(crawlerLog, new CrawlDataLocator(root))) {
final WorkLogEntry entry = item.getKey();
final Path inputPath = item.getValue();
Path outputPath = inputPath;
heartbeat.progress("Migrating" + inputPath.getFileName(), entryIdx++, totalEntries);
if (inputPath.toString().endsWith(".parquet")) {
String domain = entry.id();
String id = Integer.toHexString(domain.hashCode());
outputPath = CrawlerOutputFile.createSlopPath(root, id, domain);
if (Files.exists(inputPath)) {
try {
SlopCrawlDataRecord.convertFromParquet(inputPath, outputPath);
Files.deleteIfExists(inputPath);
} catch (Exception ex) {
outputPath = inputPath; // don't update the work log on error
logger.error("Failed to convert " + inputPath, ex);
}
}
else if (!Files.exists(inputPath) && !Files.exists(outputPath)) {
// if the input file is missing, and the output file is missing, we just write the log
// record identical to the old one
outputPath = inputPath;
}
}
// Write a log entry for the (possibly) converted file
workLog.setJobToFinished(entry.id(), outputPath.toString(), entry.cnt());
}
}
Path oldCrawlerLog = Files.createTempFile(root, "crawler-", ".migrate.old.log");
Files.move(crawlerLog, oldCrawlerLog, StandardCopyOption.REPLACE_EXISTING);
Files.move(newCrawlerLog, crawlerLog);
yield new End();
}
default -> new Error();
};
}
private static class CrawlDataLocator implements Function<WorkLogEntry, Optional<Map.Entry<WorkLogEntry, Path>>> {
private final Path crawlRootDir;
CrawlDataLocator(Path crawlRootDir) {
this.crawlRootDir = crawlRootDir;
}
@Override
public Optional<Map.Entry<WorkLogEntry, Path>> apply(WorkLogEntry entry) {
var path = getCrawledFilePath(crawlRootDir, entry.path());
if (!Files.exists(path)) {
return Optional.empty();
}
try {
return Optional.of(Map.entry(entry, path));
}
catch (Exception ex) {
return Optional.empty();
}
}
private Path getCrawledFilePath(Path crawlDir, String fileName) {
int sp = fileName.lastIndexOf('/');
// Normalize the filename
if (sp >= 0 && sp + 1 < fileName.length())
fileName = fileName.substring(sp + 1);
if (fileName.length() < 4)
fileName = Strings.repeat("0", 4 - fileName.length()) + fileName;
String sp1 = fileName.substring(0, 2);
String sp2 = fileName.substring(2, 4);
return crawlDir.resolve(sp1).resolve(sp2).resolve(fileName);
}
}
@Override
public String describe() {
return "Migrates crawl data to the latest format";
}
}
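The migration above builds a fresh `crawler.log` beside the old one and swaps them in with two `Files.move` calls, keeping the previous log as a `.migrate.old.log` backup. A minimal, self-contained sketch of that temp-file-and-move pattern (file names here are illustrative, not the exact paths used above):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicLogSwap {
    /** Replaces `log` with `newLog`, preserving the old contents in a
     *  uniquely-named backup file, mirroring the createTempFile + move
     *  sequence in the migration actor. */
    static Path swap(Path root, Path log, Path newLog) throws IOException {
        Path backup = Files.createTempFile(root, "crawler-", ".migrate.old.log");
        Files.move(log, backup, StandardCopyOption.REPLACE_EXISTING);
        Files.move(newLog, log);
        return backup;
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("swap-demo");
        Path log = Files.writeString(root.resolve("crawler.log"), "new-format-entries");
        Path newLog = Files.writeString(root.resolve("crawler.new.log"), "migrated-entries");

        // Reusing the variable names loosely; "log" starts as the old file
        Files.writeString(log, "old-entries");
        Path backup = swap(root, log, newLog);

        System.out.println(Files.readString(log));     // migrated-entries
        System.out.println(Files.readString(backup));  // old-entries
    }
}
```

The point of the two-step move is that readers of `crawler.log` never observe a half-written file; the new log is fully materialized before it takes the old one's place.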

View File

@@ -0,0 +1,47 @@
plugins {
id 'java'
id "com.google.protobuf" version "0.9.4"
id 'jvm-test-suite'
}
java {
toolchain {
languageVersion.set(JavaLanguageVersion.of(rootProject.ext.jvmVersion))
}
}
jar.archiveBaseName = 'favicon-api'
apply from: "$rootProject.projectDir/protobuf.gradle"
apply from: "$rootProject.projectDir/srcsets.gradle"
dependencies {
implementation project(':code:common:model')
implementation project(':code:common:config')
implementation project(':code:common:service')
implementation libs.bundles.slf4j
implementation libs.prometheus
implementation libs.notnull
implementation libs.guava
implementation dependencies.create(libs.guice.get()) {
exclude group: 'com.google.guava'
}
implementation libs.gson
implementation libs.bundles.protobuf
implementation libs.guava
libs.bundles.grpc.get().each {
implementation dependencies.create(it) {
exclude group: 'com.google.guava'
}
}
testImplementation libs.bundles.slf4j.test
testImplementation libs.bundles.junit
testImplementation libs.mockito
}

View File

@@ -0,0 +1,39 @@
package nu.marginalia.api.favicon;
import com.google.inject.Inject;
import nu.marginalia.service.client.GrpcChannelPoolFactory;
import nu.marginalia.service.client.GrpcMultiNodeChannelPool;
import nu.marginalia.service.discovery.property.ServiceKey;
import nu.marginalia.service.discovery.property.ServicePartition;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Optional;
public class FaviconClient {
private static final Logger logger = LoggerFactory.getLogger(FaviconClient.class);
private final GrpcMultiNodeChannelPool<FaviconAPIGrpc.FaviconAPIBlockingStub> channelPool;
@Inject
public FaviconClient(GrpcChannelPoolFactory factory) {
this.channelPool = factory.createMulti(
ServiceKey.forGrpcApi(FaviconAPIGrpc.class, ServicePartition.multi()),
FaviconAPIGrpc::newBlockingStub);
}
public record FaviconData(byte[] bytes, String contentType) {}
public Optional<FaviconData> getFavicon(String domain, int node) {
RpcFaviconResponse rsp = channelPool.call(FaviconAPIGrpc.FaviconAPIBlockingStub::getFavicon)
.forNode(node)
.run(RpcFaviconRequest.newBuilder().setDomain(domain).build());
if (rsp.getData().isEmpty())
return Optional.empty();
return Optional.of(new FaviconData(rsp.getData().toByteArray(), rsp.getContentType()));
}
}

View File

@@ -0,0 +1,20 @@
syntax="proto3";
package marginalia.api.favicon;
option java_package="nu.marginalia.api.favicon";
option java_multiple_files=true;
service FaviconAPI {
/** Fetches information about a domain. */
rpc getFavicon(RpcFaviconRequest) returns (RpcFaviconResponse) {}
}
message RpcFaviconRequest {
string domain = 1;
}
message RpcFaviconResponse {
string domain = 1;
bytes data = 2;
string contentType = 3;
}

View File

@@ -0,0 +1,49 @@
plugins {
id 'java'
id 'application'
id 'jvm-test-suite'
}
java {
toolchain {
languageVersion.set(JavaLanguageVersion.of(rootProject.ext.jvmVersion))
}
}
apply from: "$rootProject.projectDir/srcsets.gradle"
dependencies {
implementation project(':code:common:config')
implementation project(':code:common:service')
implementation project(':code:common:model')
implementation project(':code:common:db')
implementation project(':code:functions:favicon:api')
implementation project(':code:processes:crawling-process')
implementation libs.bundles.slf4j
implementation libs.prometheus
implementation libs.guava
libs.bundles.grpc.get().each {
implementation dependencies.create(it) {
exclude group: 'com.google.guava'
}
}
implementation libs.notnull
implementation libs.guava
implementation dependencies.create(libs.guice.get()) {
exclude group: 'com.google.guava'
}
implementation dependencies.create(libs.spark.get()) {
exclude group: 'org.eclipse.jetty'
}
testImplementation libs.bundles.slf4j.test
testImplementation libs.bundles.junit
testImplementation libs.mockito
}

View File

@@ -0,0 +1,48 @@
package nu.marginalia.functions.favicon;
import com.google.inject.Inject;
import com.google.inject.Singleton;
import com.google.protobuf.ByteString;
import io.grpc.stub.StreamObserver;
import nu.marginalia.api.favicon.FaviconAPIGrpc;
import nu.marginalia.api.favicon.RpcFaviconRequest;
import nu.marginalia.api.favicon.RpcFaviconResponse;
import nu.marginalia.crawl.DomainStateDb;
import nu.marginalia.service.server.DiscoverableService;
import java.util.Optional;
@Singleton
public class FaviconGrpcService extends FaviconAPIGrpc.FaviconAPIImplBase implements DiscoverableService {
private final DomainStateDb domainStateDb;
@Inject
public FaviconGrpcService(DomainStateDb domainStateDb) {
this.domainStateDb = domainStateDb;
}
public boolean shouldRegisterService() {
return domainStateDb.isAvailable();
}
@Override
public void getFavicon(RpcFaviconRequest request, StreamObserver<RpcFaviconResponse> responseObserver) {
Optional<DomainStateDb.FaviconRecord> icon = domainStateDb.getIcon(request.getDomain());
RpcFaviconResponse response;
if (icon.isEmpty()) {
response = RpcFaviconResponse.newBuilder().build();
}
else {
var iconRecord = icon.get();
response = RpcFaviconResponse.newBuilder()
.setContentType(iconRecord.contentType())
.setDomain(request.getDomain())
.setData(ByteString.copyFrom(iconRecord.imageData()))
.build();
}
responseObserver.onNext(response);
responseObserver.onCompleted();
}
}

View File

@@ -34,6 +34,7 @@ dependencies {
 implementation libs.bundles.slf4j
 implementation libs.commons.lang3
 implementation libs.commons.io
+implementation libs.wiremock
 implementation libs.prometheus
 implementation libs.guava

View File

@@ -1,6 +1,7 @@
 package nu.marginalia.livecapture;

 import com.google.gson.Gson;
+import nu.marginalia.WmsaHome;
 import nu.marginalia.model.gson.GsonFactory;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -12,6 +13,7 @@ import java.net.http.HttpRequest;
 import java.net.http.HttpResponse;
 import java.time.Duration;
 import java.util.Map;
+import java.util.Optional;

 /** Client for local browserless.io API */
 public class BrowserlessClient implements AutoCloseable {
@@ -27,13 +29,16 @@ public class BrowserlessClient implements AutoCloseable {
     private final URI browserlessURI;
     private final Gson gson = GsonFactory.get();

+    private final String userAgent = WmsaHome.getUserAgent().uaString();
+
     public BrowserlessClient(URI browserlessURI) {
         this.browserlessURI = browserlessURI;
     }

-    public String content(String url, GotoOptions gotoOptions) throws IOException, InterruptedException {
+    public Optional<String> content(String url, GotoOptions gotoOptions) throws IOException, InterruptedException {
         Map<String, Object> requestData = Map.of(
                 "url", url,
+                "userAgent", userAgent,
                 "gotoOptions", gotoOptions
         );
@@ -49,10 +54,10 @@ public class BrowserlessClient implements AutoCloseable {
         if (rsp.statusCode() >= 300) {
             logger.info("Failed to fetch content for {}, status {}", url, rsp.statusCode());
-            return null;
+            return Optional.empty();
         }

-        return rsp.body();
+        return Optional.of(rsp.body());
     }

     public byte[] screenshot(String url, GotoOptions gotoOptions, ScreenshotOptions screenshotOptions)
@@ -60,6 +65,7 @@ public class BrowserlessClient implements AutoCloseable {
         Map<String, Object> requestData = Map.of(
                 "url", url,
+                "userAgent", userAgent,
                 "options", screenshotOptions,
                 "gotoOptions", gotoOptions
         );
@@ -84,7 +90,7 @@ public class BrowserlessClient implements AutoCloseable {
     }

     @Override
-    public void close() throws Exception {
+    public void close() {
         httpClient.shutdownNow();
     }

View File

@@ -33,6 +33,7 @@ import java.sql.SQLException;
 import java.time.*;
 import java.time.format.DateTimeFormatter;
 import java.util.*;
+import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
@@ -71,7 +72,7 @@ public class FeedFetcherService {
     public enum UpdateMode {
         CLEAN,
         REFRESH
-    };
+    }

     public void updateFeeds(UpdateMode updateMode) throws IOException {
         if (updating) // Prevent concurrent updates
@@ -87,6 +88,7 @@ public class FeedFetcherService {
                      .followRedirects(HttpClient.Redirect.NORMAL)
                      .version(HttpClient.Version.HTTP_2)
                      .build();
+             ExecutorService fetchExecutor = Executors.newCachedThreadPool();
              FeedJournal feedJournal = FeedJournal.create();
              var heartbeat = serviceHeartbeat.createServiceAdHocTaskHeartbeat("Update Rss Feeds")
         ) {
@@ -131,7 +133,7 @@ public class FeedFetcherService {
             FetchResult feedData;
             try (DomainLocks.DomainLock domainLock = domainLocks.lockDomain(new EdgeDomain(feed.domain()))) {
-                feedData = fetchFeedData(feed, client, ifModifiedSinceDate, ifNoneMatchTag);
+                feedData = fetchFeedData(feed, client, fetchExecutor, ifModifiedSinceDate, ifNoneMatchTag);
             } catch (Exception ex) {
                 feedData = new FetchResult.TransientError();
             }
@@ -211,6 +213,7 @@ public class FeedFetcherService {
     private FetchResult fetchFeedData(FeedDefinition feed,
                                       HttpClient client,
+                                      ExecutorService executorService,
                                       @Nullable String ifModifiedSinceDate,
                                       @Nullable String ifNoneMatchTag)
     {
@@ -237,7 +240,14 @@ public class FeedFetcherService {
             HttpRequest getRequest = requestBuilder.build();

             for (int i = 0; i < 3; i++) {
-                HttpResponse<byte[]> rs = client.send(getRequest, HttpResponse.BodyHandlers.ofByteArray());
+                /* Note we need to use an executor to time-limit the send() method in HttpClient, as
+                 * its support for timeouts only applies to the time until response starts to be received,
+                 * and does not catch the case when the server starts to send data but then hangs.
+                 */
+                HttpResponse<byte[]> rs = executorService.submit(
+                        () -> client.send(getRequest, HttpResponse.BodyHandlers.ofByteArray()))
+                        .get(15, TimeUnit.SECONDS);

                 if (rs.statusCode() == 429) { // Too Many Requests
                     int retryAfter = Integer.parseInt(rs.headers().firstValue("Retry-After").orElse("2"));

View File

@@ -1,5 +1,9 @@
 package nu.marginalia.livecapture;

+import com.github.tomakehurst.wiremock.WireMockServer;
+import com.github.tomakehurst.wiremock.core.WireMockConfiguration;
+import nu.marginalia.WmsaHome;
+import nu.marginalia.service.module.ServiceConfigurationModule;
 import org.junit.jupiter.api.Assertions;
 import org.junit.jupiter.api.BeforeAll;
 import org.junit.jupiter.api.Tag;
@@ -8,34 +12,86 @@ import org.testcontainers.containers.GenericContainer;
 import org.testcontainers.junit.jupiter.Testcontainers;
 import org.testcontainers.utility.DockerImageName;

+import java.io.IOException;
 import java.net.URI;
 import java.util.Map;

+import static com.github.tomakehurst.wiremock.client.WireMock.*;
+
 @Testcontainers
 @Tag("slow")
 public class BrowserlessClientTest {
     static GenericContainer<?> container = new GenericContainer<>(DockerImageName.parse("browserless/chrome"))
             .withEnv(Map.of("TOKEN", "BROWSERLESS_TOKEN"))
+            .withNetworkMode("bridge")
             .withExposedPorts(3000);

+    static WireMockServer wireMockServer =
+            new WireMockServer(WireMockConfiguration.wireMockConfig()
+                    .port(18089));
+
+    static String localIp;
+
+    static URI browserlessURI;
+
     @BeforeAll
-    public static void setup() {
+    public static void setup() throws IOException {
         container.start();
+
+        browserlessURI = URI.create(String.format("http://%s:%d/",
+                container.getHost(),
+                container.getMappedPort(3000))
+        );
+
+        wireMockServer.start();
+        wireMockServer.stubFor(get("/").willReturn(aResponse().withStatus(200).withBody("Ok")));
+
+        localIp = ServiceConfigurationModule.getLocalNetworkIP();
+    }
+
+    @Tag("flaky")
+    @Test
+    public void testInspectContentUA__Flaky() throws Exception {
+        try (var client = new BrowserlessClient(browserlessURI)) {
+            client.content("http://" + localIp + ":18089/",
+                    BrowserlessClient.GotoOptions.defaultValues()
+            );
+        }
+
+        wireMockServer.verify(getRequestedFor(urlEqualTo("/")).withHeader("User-Agent", equalTo(WmsaHome.getUserAgent().uaString())));
+    }
+
+    @Tag("flaky")
+    @Test
+    public void testInspectScreenshotUA__Flaky() throws Exception {
+        try (var client = new BrowserlessClient(browserlessURI)) {
+            client.screenshot("http://" + localIp + ":18089/",
+                    BrowserlessClient.GotoOptions.defaultValues(),
+                    BrowserlessClient.ScreenshotOptions.defaultValues()
+            );
+        }
+
+        wireMockServer.verify(getRequestedFor(urlEqualTo("/")).withHeader("User-Agent", equalTo(WmsaHome.getUserAgent().uaString())));
     }

     @Test
     public void testContent() throws Exception {
-        try (var client = new BrowserlessClient(URI.create("http://" + container.getHost() + ":" + container.getMappedPort(3000)))) {
-            var content = client.content("https://www.marginalia.nu/", BrowserlessClient.GotoOptions.defaultValues());
-            Assertions.assertNotNull(content, "Content should not be null");
+        try (var client = new BrowserlessClient(browserlessURI)) {
+            var content = client.content("https://www.marginalia.nu/", BrowserlessClient.GotoOptions.defaultValues()).orElseThrow();
             Assertions.assertFalse(content.isBlank(), "Content should not be empty");
         }
     }

     @Test
     public void testScreenshot() throws Exception {
-        try (var client = new BrowserlessClient(URI.create("http://" + container.getHost() + ":" + container.getMappedPort(3000)))) {
-            var screenshot = client.screenshot("https://www.marginalia.nu/", BrowserlessClient.GotoOptions.defaultValues(), BrowserlessClient.ScreenshotOptions.defaultValues());
+        try (var client = new BrowserlessClient(browserlessURI)) {
+            var screenshot = client.screenshot("https://www.marginalia.nu/",
+                    BrowserlessClient.GotoOptions.defaultValues(),
+                    BrowserlessClient.ScreenshotOptions.defaultValues());
             Assertions.assertNotNull(screenshot, "Screenshot should not be null");
         }
     }

View File

@@ -134,6 +134,10 @@ public class QueryExpansion {
         if (scoreCombo > scoreA + scoreB || scoreCombo > 1000) {
             graph.addVariantForSpan(prev, qw, joinedWord);
         }
+        else if (StringUtils.isAlpha(prev.word()) && StringUtils.isNumeric(qw.word())) { // join e.g. trs 80 to trs80 and trs-80
+            graph.addVariantForSpan(prev, qw, prev.word() + qw.word());
+            graph.addVariantForSpan(prev, qw, prev.word() + "-" + qw.word());
+        }
     }

     prev = qw;
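The new branch above joins an alphabetic token followed by a numeric one into concatenated and hyphenated variants (so a query for "trs 80" also matches "trs80" and "trs-80"). A minimal standalone sketch of that rule, using `Character` checks in place of the commons-lang3 `StringUtils.isAlpha`/`isNumeric` used in the real code:

```java
import java.util.List;

public class NumberJoin {
    // An alphabetic token directly followed by a numeric token yields two
    // extra query variants: the concatenation and the hyphenated form.
    static List<String> variants(String prev, String next) {
        boolean alpha = !prev.isEmpty() && prev.chars().allMatch(Character::isLetter);
        boolean numeric = !next.isEmpty() && next.chars().allMatch(Character::isDigit);
        if (alpha && numeric) {
            return List.of(prev + next, prev + "-" + next);
        }
        return List.of();
    }

    public static void main(String[] args) {
        System.out.println(variants("trs", "80"));   // [trs80, trs-80]
        System.out.println(variants("foo", "bar"));  // []
    }
}
```

This is exactly the behavior the `testContractionWordNum` test below exercises with "glove 80".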

View File

@@ -213,6 +213,18 @@ public class QueryFactoryTest {
         System.out.println(subquery);
     }

+    @Test
+    public void testContractionWordNum() {
+        var subquery = parseAndGetSpecs("glove 80");
+        Assertions.assertTrue(subquery.query.compiledQuery.contains(" glove "));
+        Assertions.assertTrue(subquery.query.compiledQuery.contains(" 80 "));
+        Assertions.assertTrue(subquery.query.compiledQuery.contains(" glove-80 "));
+        Assertions.assertTrue(subquery.query.compiledQuery.contains(" glove80 "));
+    }
+
     @Test
     public void testCplusPlus() {
         var subquery = parseAndGetSpecs("std::vector::push_back vector");

View File

@@ -16,20 +16,19 @@ import org.slf4j.LoggerFactory;
 import java.util.ArrayList;
 import java.util.Comparator;
-import java.util.Iterator;
 import java.util.List;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
+import java.util.concurrent.atomic.AtomicInteger;
-
-import static java.lang.Math.clamp;
+import java.util.function.Consumer;

 @Singleton
 public class IndexClient {
     private static final Logger logger = LoggerFactory.getLogger(IndexClient.class);
     private final GrpcMultiNodeChannelPool<IndexApiGrpc.IndexApiBlockingStub> channelPool;
     private final DomainBlacklistImpl blacklist;
-    private static final ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
+    private static final ExecutorService executor = Executors.newCachedThreadPool();

     @Inject
     public IndexClient(GrpcChannelPoolFactory channelPoolFactory, DomainBlacklistImpl blacklist) {
@@ -51,40 +50,37 @@ public class IndexClient {
     /** Execute a query on the index partitions and return the combined results. */
     public AggregateQueryResponse executeQueries(RpcIndexQuery indexRequest, Pagination pagination) {
-        List<CompletableFuture<Iterator<RpcDecoratedResultItem>>> futures =
-                channelPool.call(IndexApiGrpc.IndexApiBlockingStub::query)
-                        .async(executor)
-                        .runEach(indexRequest);

         final int requestedMaxResults = indexRequest.getQueryLimits().getResultsTotal();
-        final int resultsUpperBound = requestedMaxResults * channelPool.getNumNodes();
-
-        List<RpcDecoratedResultItem> results = new ArrayList<>(resultsUpperBound);
-
-        for (var future : futures) {
-            try {
-                future.get().forEachRemaining(results::add);
-            }
-            catch (Exception e) {
-                logger.error("Downstream exception", e);
-            }
-        }
+        AtomicInteger totalNumResults = new AtomicInteger(0);
+
+        List<RpcDecoratedResultItem> results =
+                channelPool.call(IndexApiGrpc.IndexApiBlockingStub::query)
+                        .async(executor)
+                        .runEach(indexRequest)
+                        .stream()
+                        .map(future -> future.thenApply(iterator -> {
+                            List<RpcDecoratedResultItem> ret = new ArrayList<>(requestedMaxResults);
+                            iterator.forEachRemaining(ret::add);
+                            totalNumResults.addAndGet(ret.size());
+                            return ret;
+                        }))
+                        .mapMulti((CompletableFuture<List<RpcDecoratedResultItem>> fut, Consumer<List<RpcDecoratedResultItem>> c) -> {
+                            try {
+                                c.accept(fut.join());
+                            } catch (Exception e) {
+                                logger.error("Error while fetching results", e);
+                            }
+                        })
+                        .flatMap(List::stream)
+                        .filter(item -> !isBlacklisted(item))
+                        .sorted(comparator)
+                        .skip(Math.max(0, (pagination.page - 1) * pagination.pageSize))
+                        .limit(pagination.pageSize)
+                        .toList();

-        // Sort the results by ranking score and remove blacklisted domains
-        results.sort(comparator);
-        results.removeIf(this::isBlacklisted);
-
-        int numReceivedResults = results.size();
-
-        // pagination is typically 1-indexed, so we need to adjust the start and end indices
-        int indexStart = (pagination.page - 1) * pagination.pageSize;
-        int indexEnd = (pagination.page) * pagination.pageSize;
-
-        results = results.subList(
-                clamp(indexStart, 0, Math.max(0, results.size() - 1)), // from is inclusive, so subtract 1 from size()
-                clamp(indexEnd, 0, results.size()));
-
-        return new AggregateQueryResponse(results, pagination.page(), numReceivedResults);
+        return new AggregateQueryResponse(results, pagination.page(), totalNumResults.get());
     }

     private boolean isBlacklisted(RpcDecoratedResultItem item) {
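The rewrite above replaces manual index arithmetic and `subList` clamping with a `skip`/`limit` pipeline, which handles out-of-range pages for free. A self-contained sketch of the 1-indexed page selection (plain integers stand in for the gRPC result items):

```java
import java.util.List;
import java.util.stream.IntStream;

public class StreamPagination {
    // Select one 1-indexed page from sorted results. skip() and limit()
    // silently produce an empty list for pages past the end, so no
    // clamp() bookkeeping is needed as in the old subList approach.
    static List<Integer> page(List<Integer> results, int page, int pageSize) {
        return results.stream()
                .sorted()
                .skip(Math.max(0, (long) (page - 1) * pageSize))
                .limit(pageSize)
                .toList();
    }

    public static void main(String[] args) {
        List<Integer> all = IntStream.range(0, 10).boxed().toList();
        System.out.println(page(all, 1, 4)); // [0, 1, 2, 3]
        System.out.println(page(all, 3, 4)); // [8, 9]
        System.out.println(page(all, 5, 4)); // []
    }
}
```

Note the trade-off the real code makes: because pagination now happens inside the stream, the total result count has to be captured separately (the `AtomicInteger totalNumResults` accumulator) rather than read off the merged list.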

View File

@@ -0,0 +1,119 @@
package nu.marginalia.index.results;
import com.google.inject.Inject;
import com.google.inject.Singleton;
import gnu.trove.map.hash.TIntDoubleHashMap;
import nu.marginalia.WmsaHome;
import nu.marginalia.db.DbDomainQueries;
import nu.marginalia.model.EdgeDomain;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.OptionalInt;
import java.util.concurrent.TimeUnit;
@Singleton
public class DomainRankingOverrides {
private final DbDomainQueries domainQueries;
private volatile TIntDoubleHashMap rankingFactors = new TIntDoubleHashMap(100, 0.75f, -1, 1.);
private static final Logger logger = LoggerFactory.getLogger(DomainRankingOverrides.class);
private final Path overrideFilePath;
@Inject
public DomainRankingOverrides(DbDomainQueries domainQueries) {
this.domainQueries = domainQueries;
overrideFilePath = WmsaHome.getDataPath().resolve("domain-ranking-factors.txt");
Thread.ofPlatform().start(this::updateRunner);
}
// for test access
public DomainRankingOverrides(DbDomainQueries domainQueries, Path overrideFilePath)
{
this.domainQueries = domainQueries;
this.overrideFilePath = overrideFilePath;
}
public double getRankingFactor(int domainId) {
return rankingFactors.get(domainId);
}
private void updateRunner() {
for (;;) {
reloadFile();
try {
TimeUnit.MINUTES.sleep(5);
} catch (InterruptedException ex) {
logger.warn("Thread interrupted", ex);
break;
}
}
}
void reloadFile() {
if (!Files.exists(overrideFilePath)) {
return;
}
try {
List<String> lines = Files.readAllLines(overrideFilePath);
double factor = 1.;
var newRankingFactors = new TIntDoubleHashMap(lines.size(), 0.75f, -1, 1.);
for (var line : lines) {
if (line.isBlank()) continue;
if (line.startsWith("#")) continue;
String[] parts = line.split("\\s+");
if (parts.length != 2) {
logger.warn("Unrecognized format for domain overrides file: {}", line);
continue;
}
try {
switch (parts[0]) {
case "value" -> {
// error handle me
factor = Double.parseDouble(parts[1]);
if (factor < 0) {
logger.error("Negative values are not permitted, found {}", factor);
factor = 1;
}
}
case "domain" -> {
// error handle
OptionalInt domainId = domainQueries.tryGetDomainId(new EdgeDomain(parts[1]));
if (domainId.isPresent()) {
newRankingFactors.put(domainId.getAsInt(), factor);
}
else {
logger.warn("Unrecognized domain id {}", parts[1]);
}
}
default -> {
logger.warn("Unrecognized format {}", line);
}
}
} catch (Exception ex) {
logger.warn("Error in parsing domain overrides file: {} ({})", line, ex.getClass().getSimpleName());
}
}
rankingFactors = newRankingFactors;
} catch (IOException ex) {
logger.error("Failed to read " + overrideFilePath, ex);
}
}
}
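`reloadFile()` above reads a simple two-token stateful format: a `value <factor>` line sets the current factor, and each subsequent `domain <name>` line binds that factor until the next `value` line. A simplified, self-contained parser for the same format, keyed on the domain name instead of the database id used by the real class:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class OverrideFileParser {
    // Mirrors the parsing loop in reloadFile(): comments and blank lines are
    // skipped, "value" updates the running factor, "domain" records a binding.
    static Map<String, Double> parse(List<String> lines) {
        Map<String, Double> factors = new HashMap<>();
        double factor = 1.;
        for (String line : lines) {
            if (line.isBlank() || line.startsWith("#")) continue;
            String[] parts = line.trim().split("\\s+");
            if (parts.length != 2) continue;
            switch (parts[0]) {
                case "value" -> factor = Double.parseDouble(parts[1]);
                case "domain" -> factors.put(parts[1], factor);
            }
        }
        return factors;
    }

    public static void main(String[] args) {
        var factors = parse(List.of(
                "# A comment",
                "value 0.75",
                "domain first.example.com",
                "value 1.1",
                "domain third.example.com"));
        System.out.println(factors.get("first.example.com")); // 0.75
        System.out.println(factors.get("third.example.com")); // 1.1
    }
}
```

Domains never mentioned in the file fall back to the map's default, which in the real `TIntDoubleHashMap` is the no-entry value of 1.0, i.e. a neutral ranking factor.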

View File

@@ -40,13 +40,16 @@ public class IndexResultRankingService {
     private final DocumentDbReader documentDbReader;
     private final StatefulIndex statefulIndex;
+    private final DomainRankingOverrides domainRankingOverrides;

     @Inject
     public IndexResultRankingService(DocumentDbReader documentDbReader,
-                                     StatefulIndex statefulIndex)
+                                     StatefulIndex statefulIndex,
+                                     DomainRankingOverrides domainRankingOverrides)
     {
         this.documentDbReader = documentDbReader;
         this.statefulIndex = statefulIndex;
+        this.domainRankingOverrides = domainRankingOverrides;
     }

     public List<SearchResultItem> rankResults(SearchParameters params,
@@ -57,7 +60,7 @@ public class IndexResultRankingService {
         if (resultIds.isEmpty())
             return List.of();

-        IndexResultScoreCalculator resultRanker = new IndexResultScoreCalculator(statefulIndex, rankingContext, params);
+        IndexResultScoreCalculator resultRanker = new IndexResultScoreCalculator(statefulIndex, domainRankingOverrides, rankingContext, params);

         List<SearchResultItem> results = new ArrayList<>(resultIds.size());

View File

@@ -41,14 +41,17 @@ public class IndexResultScoreCalculator {
     private final CombinedIndexReader index;
     private final QueryParams queryParams;

+    private final DomainRankingOverrides domainRankingOverrides;
     private final ResultRankingContext rankingContext;
     private final CompiledQuery<String> compiledQuery;

     public IndexResultScoreCalculator(StatefulIndex statefulIndex,
+                                      DomainRankingOverrides domainRankingOverrides,
                                       ResultRankingContext rankingContext,
                                       SearchParameters params)
     {
         this.index = statefulIndex.get();
+        this.domainRankingOverrides = domainRankingOverrides;
         this.rankingContext = rankingContext;
         this.queryParams = params.queryParams;
@@ -127,10 +130,10 @@ public class IndexResultScoreCalculator {
                 * wordFlagsQuery.root.visit(new TermFlagsGraphVisitor(params.getBm25K(), wordFlagsQuery.data, unorderedMatches.getWeightedCounts(), rankingContext))
                 / (Math.sqrt(unorderedMatches.searchableKeywordCount + 1));

+        double rankingAdjustment = domainRankingOverrides.getRankingFactor(UrlIdCodec.getDomainId(combinedId));
+
         double score = normalize(
-                score_firstPosition + score_proximity + score_verbatim
-                + score_bM25
-                + score_bFlags,
+                rankingAdjustment * (score_firstPosition + score_proximity + score_verbatim + score_bM25 + score_bFlags),
                 -Math.min(0, documentBonus) // The magnitude of documentBonus, if it is negative; otherwise 0
         );
@@ -580,3 +583,4 @@ public class IndexResultScoreCalculator {
     }
 }

View File

@@ -0,0 +1,103 @@
package nu.marginalia.index.results;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import nu.marginalia.db.DbDomainQueries;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.test.TestMigrationLoader;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.parallel.Execution;
import org.junit.jupiter.api.parallel.ExecutionMode;
import org.testcontainers.containers.MariaDBContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.sql.SQLException;
@Testcontainers
@Execution(ExecutionMode.SAME_THREAD)
@Tag("slow")
class DomainRankingOverridesTest {
@Container
static MariaDBContainer<?> mariaDBContainer = new MariaDBContainer<>("mariadb")
.withDatabaseName("WMSA_prod")
.withUsername("wmsa")
.withPassword("wmsa")
.withNetworkAliases("mariadb");
private static DbDomainQueries domainQueries;
@BeforeAll
public static void setup() throws SQLException {
HikariConfig config = new HikariConfig();
config.setJdbcUrl(mariaDBContainer.getJdbcUrl());
config.setUsername("wmsa");
config.setPassword("wmsa");
var dataSource = new HikariDataSource(config);
TestMigrationLoader.flywayMigration(dataSource);
try (var conn = dataSource.getConnection();
var stmt = conn.createStatement()) {
stmt.executeUpdate("DELETE FROM EC_DOMAIN"); // Wipe any old state from other test runs
stmt.executeUpdate("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('first.example.com', 'example.com', 1)");
stmt.executeUpdate("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('second.example.com', 'example.com', 1)");
stmt.executeUpdate("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('third.example.com', 'example.com', 1)");
stmt.executeUpdate("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('not-added.example.com', 'example.com', 1)");
}
domainQueries = new DbDomainQueries(dataSource);
}
@Test
public void test() throws IOException {
Path overridesFile = Files.createTempFile(getClass().getSimpleName(), ".txt");
try {
Files.writeString(overridesFile, """
# A comment
value 0.75
domain first.example.com
domain second.example.com
value 1.1
domain third.example.com
""",
StandardOpenOption.APPEND);
var overrides = new DomainRankingOverrides(domainQueries, overridesFile);
overrides.reloadFile();
Assertions.assertEquals(0.75, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("first.example.com"))
));
Assertions.assertEquals(0.75, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("second.example.com"))
));
Assertions.assertEquals(1.1, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("third.example.com"))
));
Assertions.assertEquals(1.0, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("not-added.example.com"))
));
Assertions.assertEquals(1.0, overrides.getRankingFactor(1<<23));
}
finally {
Files.deleteIfExists(overridesFile);
}
}
}

View File

@@ -23,16 +23,33 @@ public class SimpleBlockingThreadPool {
     private final Logger logger = LoggerFactory.getLogger(SimpleBlockingThreadPool.class);

     public SimpleBlockingThreadPool(String name, int poolSize, int queueSize) {
+        this(name, poolSize, queueSize, ThreadType.PLATFORM);
+    }
+
+    public SimpleBlockingThreadPool(String name, int poolSize, int queueSize, ThreadType threadType) {
         tasks = new ArrayBlockingQueue<>(queueSize);

         for (int i = 0; i < poolSize; i++) {
-            Thread worker = new Thread(this::worker, name + "[" + i + "]");
-            worker.setDaemon(true);
-            worker.start();
+            Thread.Builder threadBuilder = switch (threadType) {
+                case VIRTUAL -> Thread.ofVirtual();
+                case PLATFORM -> Thread.ofPlatform().daemon(true);
+            };
+
+            Thread worker = threadBuilder
+                    .name(name + "[" + i + "]")
+                    .start(this::worker);
+
             workers.add(worker);
         }
     }

+    public enum ThreadType {
+        VIRTUAL,
+        PLATFORM
+    }
+
     public void submit(Task task) throws InterruptedException {
         tasks.put(task);
     }
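The `Thread.Builder` dispatch introduced above can be exercised in isolation. The following is an illustrative sketch (Java 21+), not the project's class; the names `ThreadTypeDemo` and `startWorker` are hypothetical:

```java
public class ThreadTypeDemo {
    public enum ThreadType { VIRTUAL, PLATFORM }

    // Mirrors the patch: platform threads are explicitly daemonized,
    // virtual threads are always daemons and need no flag.
    static Thread startWorker(ThreadType threadType, String name, Runnable body) {
        Thread.Builder threadBuilder = switch (threadType) {
            case VIRTUAL -> Thread.ofVirtual();
            case PLATFORM -> Thread.ofPlatform().daemon(true);
        };
        return threadBuilder.name(name).start(body);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = startWorker(ThreadType.VIRTUAL, "demo[0]", () -> {});
        t.join();
        System.out.println(t.getName());
    }
}
```

Defaulting the three-argument constructor to `ThreadType.PLATFORM` keeps existing call sites source-compatible while letting the crawler opt into virtual threads per pool.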

View File

@@ -45,6 +45,11 @@ public class GammaCodedSequenceArrayColumn extends AbstractObjectColumn<List<Gam
         );
     }

+    @Override
+    public int alignmentSize() {
+        return 1;
+    }
+
     public Reader openUnregistered(URI uri, int page) throws IOException {
         return new Reader(
                 dataColumn.openUnregistered(uri, page),
@@ -109,6 +114,11 @@ public class GammaCodedSequenceArrayColumn extends AbstractObjectColumn<List<Gam
         dataReader.skip(toSkip);
     }

+    @Override
+    public boolean isDirect() {
+        return dataReader.isDirect();
+    }
+
     @Override
     public boolean hasRemaining() throws IOException {
         return groupsReader.hasRemaining();

View File

@@ -44,6 +44,11 @@ public class GammaCodedSequenceColumn extends AbstractObjectColumn<GammaCodedSeq
         );
     }

+    @Override
+    public int alignmentSize() {
+        return 1;
+    }
+
     public Reader openUnregistered(URI uri, int page) throws IOException {
         return new Reader(
                 Storage.reader(uri, this, page, false),
@@ -96,6 +101,11 @@ public class GammaCodedSequenceColumn extends AbstractObjectColumn<GammaCodedSeq
         this.indexReader = indexReader;
     }

+    @Override
+    public boolean isDirect() {
+        return storage.isDirect();
+    }
+
     @Override
     public AbstractColumn<?, ?> columnDesc() {
         return GammaCodedSequenceColumn.this;

View File

@@ -45,6 +45,11 @@ public class VarintCodedSequenceArrayColumn extends AbstractObjectColumn<List<Va
         );
     }

+    @Override
+    public int alignmentSize() {
+        return 0;
+    }
+
     public Reader openUnregistered(URI uri, int page) throws IOException {
         return new Reader(
                 dataColumn.openUnregistered(uri, page),
@@ -109,6 +114,11 @@ public class VarintCodedSequenceArrayColumn extends AbstractObjectColumn<List<Va
         dataReader.skip(toSkip);
     }

+    @Override
+    public boolean isDirect() {
+        return dataReader.isDirect();
+    }
+
     @Override
     public boolean hasRemaining() throws IOException {
         return groupsReader.hasRemaining();

View File

@@ -44,6 +44,11 @@ public class VarintCodedSequenceColumn extends AbstractObjectColumn<VarintCodedS
         );
     }

+    @Override
+    public int alignmentSize() {
+        return 1;
+    }
+
     public Reader openUnregistered(URI uri, int page) throws IOException {
         return new Reader(
                 Storage.reader(uri, this, page, false),
@@ -101,6 +106,11 @@ public class VarintCodedSequenceColumn extends AbstractObjectColumn<VarintCodedS
         return VarintCodedSequenceColumn.this;
     }

+    @Override
+    public boolean isDirect() {
+        return storage.isDirect();
+    }
+
     @Override
     public void skip(long positions) throws IOException {
         for (int i = 0; i < positions; i++) {

View File

@@ -155,8 +155,15 @@ public class SentenceExtractor {
     public List<DocumentSentence> extractSentencesFromString(String text, EnumSet<HtmlTag> htmlTags) {
         String[] sentences;

-        // Normalize spaces
+        // Safety net against malformed data DOS attacks,
+        // found 5+ MB <p>-tags in the wild that just break
+        // the sentence extractor causing it to stall forever.
+        if (text.length() > 50_000) {
+            // 50k chars can hold a small novel, let alone single html tags
+            text = text.substring(0, 50_000);
+        }

+        // Normalize spaces
         text = normalizeSpaces(text);

         // Split into sentences
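The length guard added above is simple to verify in isolation. A minimal sketch (class and method names are hypothetical, not the project's):

```java
public class TruncateDemo {
    static final int MAX_LEN = 50_000;

    // Mirrors the safety net: cap pathological inputs before sentence extraction,
    // so a single multi-megabyte tag body cannot stall the extractor.
    static String capLength(String text) {
        if (text.length() > MAX_LEN) {
            return text.substring(0, MAX_LEN);
        }
        return text;
    }

    public static void main(String[] args) {
        String huge = "a".repeat(5_000_000); // stand-in for a 5 MB <p> tag payload
        System.out.println(capLength(huge).length()); // prints 50000
    }
}
```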

View File

@@ -5,9 +5,7 @@ import nu.marginalia.actor.state.*;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
+import java.util.*;

 public abstract class RecordActorPrototype implements ActorPrototype {
@@ -118,7 +116,7 @@ public abstract class RecordActorPrototype implements ActorPrototype {
     }

     private String functionName(Class<? extends ActorStep> functionClass) {
-        return functionClass.getSimpleName().toUpperCase();
+        return ActorStep.functionName(functionClass);
     }

     private ActorStep constructState(String message) throws ReflectiveOperationException {
@@ -145,4 +143,43 @@ public abstract class RecordActorPrototype implements ActorPrototype {
         }
     }

+    /** Get a list of JSON prototypes for each actor step declared by this actor */
+    @SuppressWarnings("unchecked")
+    public Map<String, String> getMessagePrototypes() {
+        Map<String, String> messagePrototypes = new HashMap<>();
+
+        for (var clazz : getClass().getDeclaredClasses()) {
+            if (!clazz.isRecord() || !ActorStep.class.isAssignableFrom(clazz))
+                continue;
+
+            StringJoiner sj = new StringJoiner(",\n\t", "{\n\t", "\n}");
+
+            renderToJsonPrototype(sj, (Class<? extends Record>) clazz);
+
+            messagePrototypes.put(ActorStep.functionName((Class<? extends ActorStep>) clazz), sj.toString());
+        }
+
+        return messagePrototypes;
+    }
+
+    @SuppressWarnings("unchecked")
+    private void renderToJsonPrototype(StringJoiner sj, Class<? extends Record> recordType) {
+        for (var field : recordType.getDeclaredFields()) {
+            String typeName = field.getType().getSimpleName();
+
+            if ("List".equals(typeName)) {
+                sj.add(String.format("\"%s\": [ ]", field.getName()));
+            }
+            else if (field.getType().isRecord()) {
+                var innerSj = new StringJoiner(",", "{", "}");
+
+                renderToJsonPrototype(innerSj, (Class<? extends Record>) field.getType());
+
+                // note: the nested joiner (innerSj), not the outer one, holds the sub-record's prototype
+                sj.add(String.format("\"%s\": %s", field.getName(), innerSj));
+            }
+            else {
+                sj.add(String.format("\"%s\": \"%s\"", field.getName(), typeName));
+            }
+        }
+    }
 }
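The reflective prototype rendering can be demonstrated with a flat record. This is a simplified, standalone sketch of the technique (no nested-record recursion); `PrototypeDemo` and `Example` are hypothetical names:

```java
import java.util.StringJoiner;

public class PrototypeDemo {
    record Example(String name, int count, java.util.List<String> items) {}

    // Emit a JSON skeleton where each record component is represented
    // by its type name, and List-typed fields become empty arrays.
    static String prototype(Class<? extends Record> recordType) {
        StringJoiner sj = new StringJoiner(",\n\t", "{\n\t", "\n}");
        for (var field : recordType.getDeclaredFields()) {
            String typeName = field.getType().getSimpleName();
            if ("List".equals(typeName)) {
                sj.add(String.format("\"%s\": [ ]", field.getName()));
            } else {
                sj.add(String.format("\"%s\": \"%s\"", field.getName(), typeName));
            }
        }
        return sj.toString();
    }

    public static void main(String[] args) {
        System.out.println(prototype(Example.class));
    }
}
```

Record classes declare one private field per component, in declaration order, so `getDeclaredFields()` yields a stable prototype.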

View File

@@ -1,3 +1,7 @@
 package nu.marginalia.actor.state;

-public interface ActorStep {}
+public interface ActorStep {
+    static String functionName(Class<? extends ActorStep> type) {
+        return type.getSimpleName().toUpperCase();
+    }
+}
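Hoisting `functionName` into `ActorStep` gives every caller the same name-mangling rule. A self-contained sketch (the demo interface and records are hypothetical stand-ins for the project's types):

```java
public class FunctionNameDemo {
    interface ActorStep {
        // Same rule as the patch: a step's function name is its
        // simple class name, upper-cased.
        static String functionName(Class<? extends ActorStep> type) {
            return type.getSimpleName().toUpperCase();
        }
    }

    record Monitor() implements ActorStep {}
    record End() implements ActorStep {}

    public static void main(String[] args) {
        System.out.println(ActorStep.functionName(Monitor.class)); // MONITOR
        System.out.println(ActorStep.functionName(End.class));     // END
    }
}
```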

View File

@@ -87,6 +87,8 @@ dependencies {
     implementation libs.commons.compress
     implementation libs.sqlite

+    implementation libs.bundles.httpcomponents
+
     testImplementation libs.bundles.slf4j.test
     testImplementation libs.bundles.junit
     testImplementation libs.mockito

View File

@@ -12,7 +12,6 @@ import nu.marginalia.converting.sideload.SideloadSourceFactory;
 import nu.marginalia.converting.writer.ConverterBatchWritableIf;
 import nu.marginalia.converting.writer.ConverterBatchWriter;
 import nu.marginalia.converting.writer.ConverterWriter;
-import nu.marginalia.io.CrawledDomainReader;
 import nu.marginalia.io.SerializableCrawlDataStream;
 import nu.marginalia.mq.MessageQueueFactory;
 import nu.marginalia.mqapi.converting.ConvertRequest;
@@ -36,6 +35,7 @@ import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.sql.SQLException;
+import java.util.ArrayList;
 import java.util.Collection;
 import java.util.List;
 import java.util.Optional;
@@ -51,6 +51,7 @@ public class ConverterMain extends ProcessMainClass {
     private final ProcessHeartbeat heartbeat;
     private final FileStorageService fileStorageService;
    private final SideloadSourceFactory sideloadSourceFactory;
+    private static final int SIDELOAD_THRESHOLD = Integer.getInteger("converter.sideloadThreshold", 10_000);

     public static void main(String... args) throws Exception {
@@ -201,12 +202,26 @@ public class ConverterMain extends ProcessMainClass {
             processedDomains.set(batchingWorkLog.size());
             heartbeat.setProgress(processedDomains.get() / (double) totalDomains);

-            for (var domain : WorkLog.iterableMap(crawlDir.getLogFile(),
+            logger.info("Processing small items");
+
+            // We separate the large and small domains to reduce the number of critical sections,
+            // as the large domains have a separate processing track that doesn't store everything
+            // in memory
+            final List<Path> bigTasks = new ArrayList<>();
+
+            // First process the small items
+            for (var dataPath : WorkLog.iterableMap(crawlDir.getLogFile(),
                     new CrawlDataLocator(crawlDir.getDir(), batchingWorkLog)))
             {
+                if (SerializableCrawlDataStream.getSizeHint(dataPath) >= SIDELOAD_THRESHOLD) {
+                    bigTasks.add(dataPath);
+                    continue;
+                }
+
                 pool.submit(() -> {
-                    try {
-                        ConverterBatchWritableIf writable = processor.createWritable(domain);
+                    try (var dataStream = SerializableCrawlDataStream.openDataStream(dataPath)) {
+                        ConverterBatchWritableIf writable = processor.fullProcessing(dataStream);
                         converterWriter.accept(writable);
                     }
                     catch (Exception ex) {
@@ -225,10 +240,39 @@ public class ConverterMain extends ProcessMainClass {
             do {
                 System.out.println("Waiting for pool to terminate... " + pool.getActiveCount() + " remaining");
             } while (!pool.awaitTermination(60, TimeUnit.SECONDS));
+
+            logger.info("Processing large items");
+            try (var hb = heartbeat.createAdHocTaskHeartbeat("Large Domains")) {
+                int bigTaskIdx = 0;
+                // Next the big items domain-by-domain
+                for (var dataPath : bigTasks) {
+                    hb.progress(dataPath.toFile().getName(), bigTaskIdx++, bigTasks.size());
+
+                    try {
+                        // SerializableCrawlDataStream is autocloseable, we can't try-with-resources because then it will be
+                        // closed before it's consumed by the converterWriter.  Instead, the converterWriter guarantees it
+                        // will close it after it's consumed.
+                        var stream = SerializableCrawlDataStream.openDataStream(dataPath);
+                        ConverterBatchWritableIf writable = processor.simpleProcessing(stream, SerializableCrawlDataStream.getSizeHint(dataPath));
+
+                        converterWriter.accept(writable);
+                    }
+                    catch (Exception ex) {
+                        logger.info("Error in processing", ex);
+                    }
+                    finally {
+                        heartbeat.setProgress(processedDomains.incrementAndGet() / (double) totalDomains);
+                    }
+                }
+            }
+
+            logger.info("Processing complete");
         }
     }

-    private static class CrawlDataLocator implements Function<WorkLogEntry, Optional<SerializableCrawlDataStream>> {
+    private static class CrawlDataLocator implements Function<WorkLogEntry, Optional<Path>> {
         private final Path crawlRootDir;
         private final BatchingWorkLog batchingWorkLog;
@@ -239,7 +283,7 @@ public class ConverterMain extends ProcessMainClass {
         }

         @Override
-        public Optional<SerializableCrawlDataStream> apply(WorkLogEntry entry) {
+        public Optional<Path> apply(WorkLogEntry entry) {
             if (batchingWorkLog.isItemProcessed(entry.id())) {
                 return Optional.empty();
             }
@@ -252,7 +296,7 @@ public class ConverterMain extends ProcessMainClass {
             }

             try {
-                return Optional.of(CrawledDomainReader.createDataStream(path));
+                return Optional.of(path);
             }
             catch (Exception ex) {
                 return Optional.empty();
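The small/large split driving the two processing tracks reduces to a threshold partition over size hints. A minimal sketch of that logic only, with hypothetical names (`PartitionDemo`, `partitionBig`), not the converter's actual classes:

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionDemo {
    static final int SIDELOAD_THRESHOLD = 10_000;

    // Mirrors the split above: items at or above the threshold are deferred
    // to a separate, memory-frugal pass; the rest go to the parallel pool.
    static List<Integer> partitionBig(List<Integer> sizeHints, List<Integer> smallOut) {
        List<Integer> bigTasks = new ArrayList<>();
        for (int hint : sizeHints) {
            if (hint >= SIDELOAD_THRESHOLD) bigTasks.add(hint);
            else smallOut.add(hint);
        }
        return bigTasks;
    }

    public static void main(String[] args) {
        List<Integer> small = new ArrayList<>();
        List<Integer> big = partitionBig(List.of(5, 20_000, 9_999, 10_000), small);
        System.out.println(small + " " + big); // [5, 9999] [20000, 10000]
    }
}
```

Note the `>=` comparison: a domain whose size hint equals the threshold lands on the large track, matching the `>= SIDELOAD_THRESHOLD` test in the patch.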

View File

@@ -19,6 +19,7 @@ import nu.marginalia.model.idx.WordFlags;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

+import java.io.IOException;
 import java.net.URISyntaxException;
 import java.util.ArrayList;
 import java.util.List;
@@ -91,7 +92,7 @@ public class DocumentProcessor {
                                  DocumentClass documentClass,
                                  DocumentDecorator documentDecorator,
                                  DomainLinks externalDomainLinks,
-                                 ProcessedDocument ret) throws URISyntaxException, DisqualifiedException
+                                 ProcessedDocument ret) throws URISyntaxException, IOException, DisqualifiedException
     {
         var crawlerStatus = CrawlerDocumentStatus.valueOf(crawledDocument.crawlerStatus);
@@ -109,7 +110,7 @@ public class DocumentProcessor {
         ret.state = crawlerStatusToUrlState(crawledDocument.crawlerStatus, crawledDocument.httpStatus);

-        final var plugin = findPlugin(crawledDocument);
+        AbstractDocumentProcessorPlugin plugin = findPlugin(crawledDocument);

         EdgeUrl url = new EdgeUrl(crawledDocument.url);
         LinkTexts linkTexts = anchorTextKeywords.getAnchorTextKeywords(externalDomainLinks, url);

View File

@@ -32,7 +32,6 @@ import java.util.*;
 import java.util.regex.Pattern;

 public class DomainProcessor {
-    private static final int SIDELOAD_THRESHOLD = Integer.getInteger("converter.sideloadThreshold", 10_000);
     private final DocumentProcessor documentProcessor;
     private final SiteWords siteWords;
     private final AnchorTagsSource anchorTagsSource;
@@ -54,21 +53,9 @@ public class DomainProcessor {
         geoIpDictionary.waitReady();
     }

-    public ConverterBatchWritableIf createWritable(SerializableCrawlDataStream domain) {
-        final int sizeHint = domain.sizeHint();
-
-        if (sizeHint > SIDELOAD_THRESHOLD) {
-            // If the file is too big, we run a processing mode that doesn't
-            // require loading the entire dataset into RAM
-            return sideloadProcessing(domain, sizeHint);
-        }
-
-        return fullProcessing(domain);
-    }
-
-    public SideloadProcessing sideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) {
+    public SimpleProcessing simpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) {
         try {
-            return new SideloadProcessing(dataStream, sizeHint, extraKeywords);
+            return new SimpleProcessing(dataStream, sizeHint, extraKeywords);
         }
         catch (Exception ex) {
             logger.warn("Failed to process domain sideload", ex);
@@ -76,9 +63,9 @@ public class DomainProcessor {
         }
     }

-    public SideloadProcessing sideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint) {
+    public SimpleProcessing simpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint) {
         try {
-            return new SideloadProcessing(dataStream, sizeHint);
+            return new SimpleProcessing(dataStream, sizeHint);
         }
         catch (Exception ex) {
             logger.warn("Failed to process domain sideload", ex);
@@ -86,22 +73,84 @@ public class DomainProcessor {
         }
     }

-    public class SideloadProcessing implements ConverterBatchWritableIf, SideloadSource {
+    @Nullable
+    public ProcessedDomain fullProcessing(SerializableCrawlDataStream dataStream) {
+        try {
+            if (!dataStream.hasNext()) {
+                return null;
+            }
+
+            List<ProcessedDocument> docs = new ArrayList<>();
+            Set<String> processedUrls = new HashSet<>();
+
+            if (!(dataStream.next() instanceof CrawledDomain crawledDomain)) {
+                throw new IllegalStateException("First record must be a domain, was " + dataStream.next().getClass().getSimpleName());
+            }
+
+            DomainLinks externalDomainLinks = anchorTagsSource.getAnchorTags(crawledDomain.getDomain());
+            DocumentDecorator documentDecorator = new DocumentDecorator();
+
+            // Process Domain Record
+            ProcessedDomain ret = new ProcessedDomain();
+            processDomain(crawledDomain, ret, documentDecorator);
+            ret.documents = docs;
+
+            // Process Documents
+            try (var deduplicator = new LshDocumentDeduplicator()) {
+                while (dataStream.hasNext()) {
+                    if (!(dataStream.next() instanceof CrawledDocument doc))
+                        continue;
+                    if (doc.url == null)
+                        continue;
+                    if (doc.documentBodyBytes.length == 0)
+                        continue;
+                    if (!processedUrls.add(doc.url))
+                        continue;
+
+                    try {
+                        var processedDoc = documentProcessor.process(doc, ret.domain, externalDomainLinks, documentDecorator);
+                        deduplicator.markIfDuplicate(processedDoc);
+                        docs.add(processedDoc);
+                    } catch (Exception ex) {
+                        logger.warn("Failed to process " + doc.url, ex);
+                    }
+                }
+            }
+
+            // Add late keywords and features from domain-level information
+            calculateStatistics(ret, externalDomainLinks);
+
+            return ret;
+        }
+        catch (Exception ex) {
+            logger.warn("Failed to process domain", ex);
+            return null;
+        }
+    }
+
+    /** The simple processing track processes documents individually, and does not perform any domain-level analysis.
+     *  This is needed to process extremely large domains, which would otherwise eat up too much RAM.
+     */
+    public class SimpleProcessing implements ConverterBatchWritableIf, SideloadSource {
         private final SerializableCrawlDataStream dataStream;

         private final ProcessedDomain domain;
         private final DocumentDecorator documentDecorator;
         private final Set<String> processedUrls = new HashSet<>();
         private final DomainLinks externalDomainLinks;
         private final LshDocumentDeduplicator deduplicator = new LshDocumentDeduplicator();

         private static final ProcessingIterator.Factory iteratorFactory = ProcessingIterator.factory(8,
                 Integer.getInteger("java.util.concurrent.ForkJoinPool.common.parallelism", Runtime.getRuntime().availableProcessors())
         );

-        SideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint) throws IOException {
+        SimpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint) throws IOException {
             this(dataStream, sizeHint, List.of());
         }

-        SideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) throws IOException {
+        SimpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) throws IOException {
             this.dataStream = dataStream;

             if (!dataStream.hasNext() || !(dataStream.next() instanceof CrawledDomain crawledDomain))
@@ -128,6 +177,7 @@ public class DomainProcessor {
         @Override
         public Iterator<ProcessedDocument> getDocumentsStream() {
             return iteratorFactory.create((taskConsumer) -> {
+
                 while (dataStream.hasNext())
                 {
                     if (!(dataStream.next() instanceof CrawledDocument doc))
@@ -172,65 +222,6 @@ public class DomainProcessor {
         }
     }

-    @Nullable
-    public ProcessedDomain fullProcessing(SerializableCrawlDataStream dataStream) {
-        try {
-            if (!dataStream.hasNext()) {
-                return null;
-            }
-
-            List<ProcessedDocument> docs = new ArrayList<>();
-            Set<String> processedUrls = new HashSet<>();
-
-            if (!(dataStream.next() instanceof CrawledDomain crawledDomain)) {
-                throw new IllegalStateException("First record must be a domain, was " + dataStream.next().getClass().getSimpleName());
-            }
-
-            DomainLinks externalDomainLinks = anchorTagsSource.getAnchorTags(crawledDomain.getDomain());
-            DocumentDecorator documentDecorator = new DocumentDecorator();
-
-            // Process Domain Record
-            ProcessedDomain ret = new ProcessedDomain();
-            processDomain(crawledDomain, ret, documentDecorator);
-            ret.documents = docs;
-
-            // Process Documents
-            try (var deduplicator = new LshDocumentDeduplicator()) {
-                while (dataStream.hasNext()) {
-                    if (!(dataStream.next() instanceof CrawledDocument doc))
-                        continue;
-                    if (doc.url == null)
-                        continue;
-                    if (doc.documentBody.isBlank())
-                        continue;
-                    if (!processedUrls.add(doc.url))
-                        continue;
-
-                    try {
-                        var processedDoc = documentProcessor.process(doc, ret.domain, externalDomainLinks, documentDecorator);
-                        deduplicator.markIfDuplicate(processedDoc);
-                        docs.add(processedDoc);
-                    } catch (Exception ex) {
-                        logger.warn("Failed to process " + doc.url, ex);
-                    }
-                }
-            }
-
-            // Add late keywords and features from domain-level information
-            calculateStatistics(ret, externalDomainLinks);
-
-            return ret;
-        }
-        catch (Exception ex) {
-            logger.warn("Failed to process domain", ex);
-            return null;
-        }
-    }
-
     private void processDomain(CrawledDomain crawledDomain,
                                ProcessedDomain domain,
                                DocumentDecorator decorator)

View File

@@ -116,7 +116,7 @@ public class AdblockSimulator {

     // Refrain from cleaning up this code, it's very hot code and needs to be fast.
-    // This version is about 100x faster than the a "clean" first stab implementation.
+    // This version is about 100x faster than a "clean" first stab implementation.

     class RuleVisitor implements NodeFilter {
         public boolean sawAds;

View File

@@ -23,7 +23,7 @@ public class DocumentGeneratorExtractor {

         var tags = doc.select("meta[name=generator]");

-        if (tags.size() == 0) {
+        if (tags.isEmpty()) {
             // Some sites have a comment in the head instead of a meta tag
             return fingerprintServerTech(doc, responseHeaders);
         }

View File

@@ -24,7 +24,7 @@ public class DocumentValuator {
         double scriptPenalty = getScriptPenalty(parsedDocument);
         double chatGptPenalty = getChatGptContentFarmPenalty(parsedDocument);

-        int rawLength = crawledDocument.documentBody.length();
+        int rawLength = crawledDocument.documentBodyBytes.length;

         if (textLength == 0) {
             throw new DisqualifiedException(DisqualifiedException.DisqualificationReason.LENGTH);

View File

@@ -218,7 +218,10 @@ public class FeatureExtractor {
             }
         }

-        if (features.contains(HtmlFeature.JS) && adblockSimulator.hasAds(doc.clone())) {
+        if (features.contains(HtmlFeature.JS)
+            // remove while disabled to get rid of expensive clone() call:
+            // adblockSimulator.hasAds(doc.clone())
+        ) {
             features.add(HtmlFeature.ADVERTISEMENT);
         }

View File

@@ -14,6 +14,7 @@ import nu.marginalia.model.crawldata.CrawledDocument;
 import nu.marginalia.model.html.HtmlStandard;

 import javax.annotation.Nullable;
+import java.io.IOException;
 import java.net.URISyntaxException;
 import java.util.HashSet;
 import java.util.List;
@@ -25,7 +26,7 @@ public abstract class AbstractDocumentProcessorPlugin {
         this.languageFilter = languageFilter;
     }

-    public abstract DetailsWithWords createDetails(CrawledDocument crawledDocument, LinkTexts linkTexts, DocumentClass documentClass) throws DisqualifiedException, URISyntaxException;
+    public abstract DetailsWithWords createDetails(CrawledDocument crawledDocument, LinkTexts linkTexts, DocumentClass documentClass) throws DisqualifiedException, URISyntaxException, IOException;

     public abstract boolean isApplicable(CrawledDocument doc);

     protected void checkDocumentLanguage(DocumentLanguageData dld) throws DisqualifiedException {
@@ -86,6 +87,7 @@ public abstract class AbstractDocumentProcessorPlugin {
             return this;
         }

         public MetaTagsBuilder addPubDate(PubDate pubDate) {
             if (pubDate.year() > 1900) {

View File

@@ -6,6 +6,7 @@ import nu.marginalia.converting.model.DisqualifiedException;
 import nu.marginalia.converting.model.DocumentHeaders;
 import nu.marginalia.converting.model.GeneratorType;
 import nu.marginalia.converting.model.ProcessedDocumentDetails;
+import nu.marginalia.converting.processor.AcceptableAds;
 import nu.marginalia.converting.processor.DocumentClass;
 import nu.marginalia.converting.processor.MetaRobotsTag;
 import nu.marginalia.converting.processor.logic.*;
@@ -32,11 +33,11 @@ import nu.marginalia.model.crawldata.CrawledDocument;
 import nu.marginalia.model.html.HtmlStandard;
 import nu.marginalia.model.idx.DocumentFlags;
 import nu.marginalia.model.idx.DocumentMetadata;
-import org.jsoup.Jsoup;
 import org.jsoup.nodes.Document;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

+import java.io.IOException;
 import java.net.URISyntaxException;
 import java.util.EnumSet;
 import java.util.HashSet;
@@ -51,7 +52,6 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
     private final double minDocumentQuality;

     private final FeatureExtractor featureExtractor;
-    private final TitleExtractor titleExtractor;
     private final DocumentKeywordExtractor keywordExtractor;
     private final PubDateSniffer pubDateSniffer;
@@ -74,7 +74,6 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
                                       @Named("min-document-quality") Double minDocumentQuality,
                                       LanguageFilter languageFilter,
                                       FeatureExtractor featureExtractor,
-                                      TitleExtractor titleExtractor,
                                       DocumentKeywordExtractor keywordExtractor,
                                       PubDateSniffer pubDateSniffer,
                                       DocumentLengthLogic documentLengthLogic,
@@ -89,7 +88,6 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
         this.minDocumentQuality = minDocumentQuality;
         this.featureExtractor = featureExtractor;
-        this.titleExtractor = titleExtractor;
         this.keywordExtractor = keywordExtractor;
         this.pubDateSniffer = pubDateSniffer;
         this.metaRobotsTag = metaRobotsTag;
@@ -108,19 +106,17 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
     public DetailsWithWords createDetails(CrawledDocument crawledDocument,
                                           LinkTexts linkTexts,
                                           DocumentClass documentClass)
-            throws DisqualifiedException, URISyntaxException {
+            throws DisqualifiedException, URISyntaxException, IOException {

-        String documentBody = crawledDocument.documentBody;
-
-        if (languageFilter.isBlockedUnicodeRange(documentBody)) {
+        if (languageFilter.isBlockedUnicodeRange(crawledDocument.documentBody(512))) {
             throw new DisqualifiedException(DisqualificationReason.LANGUAGE);
         }

-        if (documentBody.length() > MAX_DOCUMENT_LENGTH_BYTES) { // 128kb
-            documentBody = documentBody.substring(0, MAX_DOCUMENT_LENGTH_BYTES);
-        }
+        Document doc = crawledDocument.parseBody();

-        Document doc = Jsoup.parse(documentBody);
+        if (AcceptableAds.hasAcceptableAdsTag(doc)) {
+            throw new DisqualifiedException(DisqualifiedException.DisqualificationReason.ACCEPTABLE_ADS);
+        }

         if (!metaRobotsTag.allowIndexingByMetaTag(doc)) {
             throw new DisqualifiedException(DisqualificationReason.FORBIDDEN);
@@ -138,32 +134,33 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
         }

         var prunedDoc = specialization.prune(doc);

-        DocumentLanguageData dld = sentenceExtractorProvider.get().extractSentences(prunedDoc);
-
-        checkDocumentLanguage(dld);
-
-        var ret = new ProcessedDocumentDetails();
-
         final int length = getLength(doc);
         final HtmlStandard standard = getHtmlStandard(doc);
         final double quality = documentValuator.getQuality(crawledDocument, standard, doc, length);

+        if (isDisqualified(documentClass, url, quality, doc.title())) {
+            throw new DisqualifiedException(DisqualificationReason.QUALITY);
+        }
+
+        DocumentLanguageData dld = sentenceExtractorProvider.get().extractSentences(prunedDoc);
+
+        checkDocumentLanguage(dld);
+
+        documentLengthLogic.validateLength(dld, specialization.lengthModifier() * documentClass.lengthLimitModifier());
+
+        var ret = new ProcessedDocumentDetails();
+
         ret.length = length;
         ret.standard = standard;
         ret.title = specialization.getTitle(doc, dld, crawledDocument.url);

-        documentLengthLogic.validateLength(dld, specialization.lengthModifier() * documentClass.lengthLimitModifier());
-
         final Set<HtmlFeature> features = featureExtractor.getFeatures(url, doc, documentHeaders, dld);
         ret.features = features;
ret.quality = documentValuator.adjustQuality(quality, features); ret.quality = documentValuator.adjustQuality(quality, features);
ret.hashCode = dld.localitySensitiveHashCode(); ret.hashCode = dld.localitySensitiveHashCode();
if (isDisqualified(documentClass, url, quality, ret.title)) {
throw new DisqualifiedException(DisqualificationReason.QUALITY);
}
PubDate pubDate = pubDateSniffer.getPubDate(documentHeaders, url, doc, standard, true); PubDate pubDate = pubDateSniffer.getPubDate(documentHeaders, url, doc, standard, true);
EnumSet<DocumentFlags> documentFlags = documentFlags(features, generatorParts.type()); EnumSet<DocumentFlags> documentFlags = documentFlags(features, generatorParts.type());


@@ -71,7 +71,7 @@ public class PlainTextDocumentProcessorPlugin extends AbstractDocumentProcessorP
                                           DocumentClass documentClass)
             throws DisqualifiedException, URISyntaxException {
-        String documentBody = crawledDocument.documentBody;
+        String documentBody = crawledDocument.documentBody();
         if (languageFilter.isBlockedUnicodeRange(documentBody)) {
             throw new DisqualifiedException(DisqualifiedException.DisqualificationReason.LANGUAGE);


@@ -19,6 +19,7 @@ import nu.marginalia.model.idx.DocumentMetadata;
 import nu.marginalia.model.idx.WordFlags;
 import java.net.URISyntaxException;
+import java.nio.charset.StandardCharsets;
 import java.time.LocalDateTime;
 import java.util.EnumSet;
 import java.util.List;
@@ -50,7 +51,7 @@ public class SideloaderProcessing {
                 "OK",
                 "NP",
                 "",
-                body,
+                body.getBytes(StandardCharsets.UTF_8),
                 false,
                 null,
                 null


@@ -127,7 +127,7 @@ public class EncyclopediaMarginaliaNuSideloader implements SideloadSource, AutoC
         }
         fullHtml.append("</div></body></html>");
-        var doc = sideloaderProcessing
+        return sideloaderProcessing
                 .processDocument(fullUrl,
                         fullHtml.toString(),
                         List.of("encyclopedia", "wiki"),
@@ -137,8 +137,6 @@ public class EncyclopediaMarginaliaNuSideloader implements SideloadSource, AutoC
                         anchorTextKeywords.getAnchorTextKeywords(domainLinks, new EdgeUrl(fullUrl)),
                         LocalDate.now().getYear(),
                         10_000_000);
-        return doc;
     }
     private String normalizeUtf8(String url) {


@@ -106,11 +106,7 @@ public class WarcSideloader implements SideloadSource, AutoCloseable {
                 return false;
             var url = new EdgeUrl(warcResponse.target());
-            if (!Objects.equals(url.getDomain(), domain)) {
-                return false;
-            }
-            return true;
+            return Objects.equals(url.getDomain(), domain);
         } catch (Exception e) {
             logger.warn("Failed to process response", e);
         }


@@ -39,6 +39,9 @@ public class ConverterWriter implements AutoCloseable {
         workerThread.start();
     }
+    /** Queue and eventually write the domain into the converter journal
+     * The domain object will be closed after it's processed.
+     * */
     public void accept(@Nullable ConverterBatchWritableIf domain) {
         if (null == domain)
             return;
@@ -72,15 +75,15 @@ public class ConverterWriter implements AutoCloseable {
                 if (workLog.isItemCommitted(id) || workLog.isItemInCurrentBatch(id)) {
                     logger.warn("Skipping already logged item {}", id);
-                    data.close();
-                    continue;
                 }
-                currentWriter.write(data);
-                workLog.logItem(id);
+                else {
+                    currentWriter.write(data);
+                    workLog.logItem(id);
+                    data.close();
+                }
                 switcher.tick();
-                data.close();
             }
         }
         catch (Exception ex) {


@@ -11,7 +11,6 @@ import nu.marginalia.slop.column.primitive.IntColumn;
 import nu.marginalia.slop.column.primitive.LongColumn;
 import nu.marginalia.slop.column.string.EnumColumn;
 import nu.marginalia.slop.column.string.StringColumn;
-import nu.marginalia.slop.column.string.TxtStringColumn;
 import nu.marginalia.slop.desc.StorageType;
 import org.jetbrains.annotations.Nullable;
@@ -182,8 +181,8 @@ public record SlopDocumentRecord(
     }
     // Basic information
-    private static final TxtStringColumn domainsColumn = new TxtStringColumn("domain", StandardCharsets.UTF_8, StorageType.GZIP);
-    private static final TxtStringColumn urlsColumn = new TxtStringColumn("url", StandardCharsets.UTF_8, StorageType.GZIP);
+    private static final StringColumn domainsColumn = new StringColumn("domain", StandardCharsets.UTF_8, StorageType.GZIP);
+    private static final StringColumn urlsColumn = new StringColumn("url", StandardCharsets.UTF_8, StorageType.GZIP);
     private static final VarintColumn ordinalsColumn = new VarintColumn("ordinal", StorageType.PLAIN);
     private static final EnumColumn statesColumn = new EnumColumn("state", StandardCharsets.US_ASCII, StorageType.PLAIN);
     private static final StringColumn stateReasonsColumn = new StringColumn("stateReason", StandardCharsets.US_ASCII, StorageType.GZIP);
@@ -211,7 +210,7 @@ public record SlopDocumentRecord(
     private static final VarintCodedSequenceArrayColumn spansColumn = new VarintCodedSequenceArrayColumn("spans", StorageType.ZSTD);
     public static class KeywordsProjectionReader extends SlopTable {
-        private final TxtStringColumn.Reader domainsReader;
+        private final StringColumn.Reader domainsReader;
         private final VarintColumn.Reader ordinalsReader;
         private final IntColumn.Reader htmlFeaturesReader;
         private final LongColumn.Reader domainMetadataReader;
@@ -275,8 +274,8 @@ public record SlopDocumentRecord(
     }
     public static class MetadataReader extends SlopTable {
-        private final TxtStringColumn.Reader domainsReader;
-        private final TxtStringColumn.Reader urlsReader;
+        private final StringColumn.Reader domainsReader;
+        private final StringColumn.Reader urlsReader;
         private final VarintColumn.Reader ordinalsReader;
         private final StringColumn.Reader titlesReader;
         private final StringColumn.Reader descriptionsReader;
@@ -332,8 +331,8 @@ public record SlopDocumentRecord(
     }
     public static class Writer extends SlopTable {
-        private final TxtStringColumn.Writer domainsWriter;
-        private final TxtStringColumn.Writer urlsWriter;
+        private final StringColumn.Writer domainsWriter;
+        private final StringColumn.Writer urlsWriter;
         private final VarintColumn.Writer ordinalsWriter;
         private final EnumColumn.Writer statesWriter;
         private final StringColumn.Writer stateReasonsWriter;


@@ -98,7 +98,7 @@ public class ConvertingIntegrationTest {
     @Test
     public void testMemexMarginaliaNuSideloadProcessing() throws IOException {
-        var ret = domainProcessor.sideloadProcessing(asSerializableCrawlData(readMarginaliaWorkingSet()), 100);
+        var ret = domainProcessor.simpleProcessing(asSerializableCrawlData(readMarginaliaWorkingSet()), 100);
         assertNotNull(ret);
         assertEquals("memex.marginalia.nu", ret.id());
@@ -146,7 +146,7 @@ public class ConvertingIntegrationTest {
                 "OK",
                 "",
                 "",
-                readClassPathFile(p.toString()),
+                readClassPathFile(p.toString()).getBytes(),
                 false,
                 null,
                 null


@@ -20,6 +20,7 @@ import nu.marginalia.model.crawldata.CrawledDocument;
 import nu.marginalia.model.crawldata.CrawledDomain;
 import nu.marginalia.model.crawldata.SerializableCrawlData;
 import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileWriter;
+import org.apache.hc.client5.http.cookie.BasicCookieStore;
 import org.junit.jupiter.api.*;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -200,23 +201,23 @@ public class CrawlingThenConvertingIntegrationTest {
     @Test
     public void crawlRobotsTxt() throws Exception {
-        var specs = new CrawlerMain.CrawlSpecRecord("search.marginalia.nu", 5,
-                List.of("https://search.marginalia.nu/search?q=hello+world")
+        var specs = new CrawlerMain.CrawlSpecRecord("marginalia-search.com", 5,
+                List.of("https://marginalia-search.com/search?q=hello+world")
         );
         CrawledDomain domain = crawl(specs);
         assertFalse(domain.doc.isEmpty());
         assertEquals("OK", domain.crawlerStatus);
-        assertEquals("search.marginalia.nu", domain.domain);
+        assertEquals("marginalia-search.com", domain.domain);
         Set<String> allUrls = domain.doc.stream().map(doc -> doc.url).collect(Collectors.toSet());
-        assertTrue(allUrls.contains("https://search.marginalia.nu/search"), "We expect a record for entities that are forbidden");
+        assertTrue(allUrls.contains("https://marginalia-search.com/search"), "We expect a record for entities that are forbidden");
         var output = process();
         assertNotNull(output);
         assertFalse(output.documents.isEmpty());
-        assertEquals(new EdgeDomain("search.marginalia.nu"), output.domain);
+        assertEquals(new EdgeDomain("marginalia-search.com"), output.domain);
         assertEquals(DomainIndexingState.ACTIVE, output.state);
         for (var doc : output.documents) {
@@ -246,7 +247,7 @@ public class CrawlingThenConvertingIntegrationTest {
     private CrawledDomain crawl(CrawlerMain.CrawlSpecRecord specs, Predicate<EdgeDomain> domainBlacklist) throws Exception {
         List<SerializableCrawlData> data = new ArrayList<>();
-        try (var recorder = new WarcRecorder(fileName);
+        try (var recorder = new WarcRecorder(fileName, new BasicCookieStore());
              var db = new DomainStateDb(dbTempFile))
         {
             new CrawlerRetreiver(httpFetcher, new DomainProber(domainBlacklist), specs, db, recorder).crawlDomain();


@@ -55,16 +55,19 @@ dependencies {
     implementation libs.zstd
     implementation libs.jwarc
     implementation libs.crawlercommons
-    implementation libs.okhttp3
     implementation libs.jsoup
     implementation libs.opencsv
     implementation libs.fastutil
     implementation libs.bundles.mariadb
+    implementation libs.bundles.httpcomponents
     testImplementation libs.bundles.slf4j.test
     testImplementation libs.bundles.junit
     testImplementation libs.mockito
+    testImplementation libs.wiremock
     testImplementation project(':code:processes:test-data')
 }


@@ -2,11 +2,16 @@ package nu.marginalia.contenttype;
 import org.apache.commons.lang3.StringUtils;
+import java.nio.charset.Charset;
+import java.nio.charset.IllegalCharsetNameException;
+import java.nio.charset.StandardCharsets;
 /** Content type and charset of a document
  * @param contentType The content type, e.g. "text/html"
  * @param charset The charset, e.g. "UTF-8"
  */
 public record ContentType(String contentType, String charset) {
     public static ContentType parse(String contentTypeHeader) {
         if (contentTypeHeader == null || contentTypeHeader.isBlank())
             return new ContentType(null, null);
@@ -15,9 +20,31 @@ public record ContentType(String contentType, String charset) {
         String contentType = parts[0].trim();
         String charset = parts.length > 1 ? parts[1].trim() : "UTF-8";
+        if (charset.toLowerCase().startsWith("charset=")) {
+            charset = charset.substring("charset=".length());
+        }
         return new ContentType(contentType, charset);
     }
+    /** Best effort method for turning the provided charset string into a Java charset object,
+     * with some guesswork-heuristics for when it doesn't work
+     */
+    public Charset asCharset() {
+        try {
+            if (Charset.isSupported(charset)) {
+                return Charset.forName(charset);
+            } else if (charset.equalsIgnoreCase("macintosh-latin")) {
+                return StandardCharsets.ISO_8859_1;
+            } else {
+                return StandardCharsets.UTF_8;
+            }
+        }
+        catch (IllegalCharsetNameException ex) { // thrown by Charset.isSupported()
+            return StandardCharsets.UTF_8;
+        }
+    }
     public boolean is(String contentType) {
         return this.contentType.equalsIgnoreCase(contentType);
     }
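The parse-then-fallback behavior in the hunk above can be illustrated with a self-contained sketch. This mirrors the patch but is not the project's class; the wrapper class name is invented for the example:

```java
import java.nio.charset.Charset;
import java.nio.charset.IllegalCharsetNameException;
import java.nio.charset.StandardCharsets;

public class ContentTypeSketch {
    // Standalone stand-in for nu.marginalia.contenttype.ContentType
    public record ContentType(String contentType, String charset) {
        public static ContentType parse(String header) {
            if (header == null || header.isBlank())
                return new ContentType(null, null);

            String[] parts = header.split(";", 2);
            String contentType = parts[0].trim();
            String charset = parts.length > 1 ? parts[1].trim() : "UTF-8";

            // strip an optional "charset=" prefix, as the patch does
            if (charset.toLowerCase().startsWith("charset="))
                charset = charset.substring("charset=".length());

            return new ContentType(contentType, charset);
        }

        /** Best-effort conversion to a Java Charset, falling back instead of throwing */
        public Charset asCharset() {
            try {
                if (Charset.isSupported(charset))
                    return Charset.forName(charset);
                else if (charset.equalsIgnoreCase("macintosh-latin"))
                    return StandardCharsets.ISO_8859_1; // close enough to serve
                else
                    return StandardCharsets.UTF_8;
            }
            catch (IllegalCharsetNameException ex) { // thrown by Charset.isSupported()
                return StandardCharsets.UTF_8;
            }
        }
    }
}
```

The point of routing everything through `isSupported`/the catch clause is that crawled documents declare arbitrary garbage in their `Content-Type` headers, and the parser must never propagate an exception for a merely malformed charset name.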


@@ -1,9 +1,12 @@
 package nu.marginalia.contenttype;
+import org.jsoup.Jsoup;
+import org.jsoup.nodes.Document;
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
 import java.nio.charset.Charset;
-import java.nio.charset.IllegalCharsetNameException;
 import java.nio.charset.StandardCharsets;
-import java.nio.charset.UnsupportedCharsetException;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
@@ -23,24 +26,25 @@ public class DocumentBodyToString {
         return new String(data, charset);
     }
+    public static Document getParsedData(ContentType type, byte[] data, int maxLength, String url) throws IOException {
+        final Charset charset;
+        if (type.charset() == null || type.charset().isBlank()) {
+            charset = StandardCharsets.UTF_8;
+        } else {
+            charset = charsetMap.computeIfAbsent(type, DocumentBodyToString::computeCharset);
+        }
+        ByteArrayInputStream bais = new ByteArrayInputStream(data, 0, Math.min(data.length, maxLength));
+        return Jsoup.parse(bais, charset.name(), url);
+    }
     private static Charset computeCharset(ContentType type) {
-        try {
-            if (type.charset() == null || type.charset().isBlank())
-                return StandardCharsets.UTF_8;
-            else {
-                return Charset.forName(type.charset());
-            }
-        }
-        catch (IllegalCharsetNameException ex) {
-            // Fall back to UTF-8 if we don't understand what this is. It's *probably* fine? Maybe?
+        if (type.charset() == null || type.charset().isBlank()) {
             return StandardCharsets.UTF_8;
-        }
-        catch (UnsupportedCharsetException ex) {
-            // This is usually like Macintosh Latin
-            // (https://en.wikipedia.org/wiki/Macintosh_Latin_encoding)
-            //
-            // It's close enough to 8859-1 to serve
-            return StandardCharsets.ISO_8859_1;
+        } else {
+            return type.asCharset();
         }
     }
 }


@@ -19,22 +19,19 @@ import nu.marginalia.crawl.retreival.DomainProber;
 import nu.marginalia.crawl.warc.WarcArchiverFactory;
 import nu.marginalia.crawl.warc.WarcArchiverIf;
 import nu.marginalia.db.DomainBlacklist;
-import nu.marginalia.io.CrawledDomainReader;
 import nu.marginalia.io.CrawlerOutputFile;
 import nu.marginalia.model.EdgeDomain;
 import nu.marginalia.mq.MessageQueueFactory;
-import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileWriter;
 import nu.marginalia.process.ProcessConfiguration;
 import nu.marginalia.process.ProcessConfigurationModule;
 import nu.marginalia.process.ProcessMainClass;
 import nu.marginalia.process.control.ProcessHeartbeatImpl;
 import nu.marginalia.process.log.WorkLog;
 import nu.marginalia.service.module.DatabaseModule;
+import nu.marginalia.slop.SlopCrawlDataRecord;
 import nu.marginalia.storage.FileStorageService;
 import nu.marginalia.storage.model.FileStorageId;
 import nu.marginalia.util.SimpleBlockingThreadPool;
-import okhttp3.ConnectionPool;
-import okhttp3.Dispatcher;
 import org.jetbrains.annotations.NotNull;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -44,10 +41,7 @@ import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.StandardCopyOption;
 import java.security.Security;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.List;
-import java.util.Map;
+import java.util.*;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
@@ -85,6 +79,7 @@ public class CrawlerMain extends ProcessMainClass {
     @Inject
     public CrawlerMain(UserAgent userAgent,
+                       HttpFetcherImpl httpFetcher,
                        ProcessHeartbeatImpl heartbeat,
                        MessageQueueFactory messageQueueFactory, DomainProber domainProber,
                        FileStorageService fileStorageService,
@@ -98,6 +93,7 @@ public class CrawlerMain extends ProcessMainClass {
         super(messageQueueFactory, processConfiguration, gson, CRAWLER_INBOX);
         this.userAgent = userAgent;
+        this.fetcher = httpFetcher;
         this.heartbeat = heartbeat;
         this.domainProber = domainProber;
         this.fileStorageService = fileStorageService;
@@ -107,14 +103,19 @@ public class CrawlerMain extends ProcessMainClass {
         this.blacklist = blacklist;
         this.node = processConfiguration.node();
+        SimpleBlockingThreadPool.ThreadType threadType;
+        if (Boolean.getBoolean("crawler.useVirtualThreads")) {
+            threadType = SimpleBlockingThreadPool.ThreadType.VIRTUAL;
+        }
+        else {
+            threadType = SimpleBlockingThreadPool.ThreadType.PLATFORM;
+        }
         pool = new SimpleBlockingThreadPool("CrawlerPool",
                 Integer.getInteger("crawler.poolSize", 256),
-                1);
-        fetcher = new HttpFetcherImpl(userAgent,
-                new Dispatcher(),
-                new ConnectionPool(5, 10, TimeUnit.SECONDS)
-        );
+                1,
+                threadType);
         // Wait for the blacklist to be loaded before starting the crawl
         blacklist.waitUntilLoaded();
@@ -132,6 +133,10 @@ public class CrawlerMain extends ProcessMainClass {
         System.setProperty("sun.net.client.defaultConnectTimeout", "30000");
         System.setProperty("sun.net.client.defaultReadTimeout", "30000");
+        // Set the maximum number of connections to keep alive in the connection pool
+        System.setProperty("jdk.httpclient.idleTimeout", "15"); // 15 seconds
+        System.setProperty("jdk.httpclient.connectionPoolSize", "256");
         // We don't want to use too much memory caching sessions for https
         System.setProperty("javax.net.ssl.sessionCacheSize", "2048");
@@ -225,10 +230,7 @@ public class CrawlerMain extends ProcessMainClass {
             logger.info("Loaded {} domains", crawlSpecRecords.size());
-            // Shuffle the domains to ensure we get a good mix of domains in each crawl,
-            // so that e.g. the big domains don't get all crawled at once, or we end up
-            // crawling the same server in parallel from different subdomains...
-            Collections.shuffle(crawlSpecRecords);
+            crawlSpecRecords.sort(crawlSpecArrangement(crawlSpecRecords));
             // First a validation run to ensure the file is all good to parse
             if (crawlSpecRecords.isEmpty()) {
@@ -249,9 +251,14 @@ public class CrawlerMain extends ProcessMainClass {
             // (this happens when the process is restarted after a crash or a shutdown)
             tasksDone.set(workLog.countFinishedJobs());
-            // Create crawl tasks and submit them to the pool for execution
+            // List of deferred tasks used to ensure beneficial scheduling of domains with regard to DomainLocks,
+            // merely shuffling the domains tends to lead to a lot of threads being blocked waiting for a semaphore,
+            // this will more aggressively attempt to schedule the jobs to avoid blocking
+            List<CrawlTask> taskList = new ArrayList<>();
+            // Create crawl tasks
             for (CrawlSpecRecord crawlSpec : crawlSpecRecords) {
-                if (workLog.isJobFinished(crawlSpec.domain()))
+                if (workLog.isJobFinished(crawlSpec.domain))
                     continue;
                 var task = new CrawlTask(
@@ -262,11 +269,22 @@ public class CrawlerMain extends ProcessMainClass {
                         domainStateDb,
                         workLog);
-                if (pendingCrawlTasks.putIfAbsent(crawlSpec.domain(), task) == null) {
-                    pool.submitQuietly(task);
+                // Try to run immediately, to avoid unnecessarily keeping the entire work set in RAM
+                if (!trySubmitDeferredTask(task)) {
+                    // Otherwise add to the taskList for deferred execution
+                    taskList.add(task);
                 }
             }
+            // Schedule viable tasks for execution until list is empty
+            while (!taskList.isEmpty()) {
+                taskList.removeIf(this::trySubmitDeferredTask);
+                // Add a small pause here to avoid busy looping toward the end of the execution cycle when
+                // we might have no new viable tasks to run for hours on end
+                TimeUnit.MILLISECONDS.sleep(50);
+            }
             logger.info("Shutting down the pool, waiting for tasks to complete...");
             pool.shutDown();
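The submit-or-defer loop in the hunk above can be sketched in isolation. `Task`, `canRun()` and the executor here are simplified stand-ins for the patch's CrawlTask, DomainLocks and SimpleBlockingThreadPool, so this is an illustration of the pattern, not the project's code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class DeferredSchedulerSketch {
    public interface Task {
        boolean canRun(); // stand-in for "the per-host lock is available"
        void run();
    }

    /** Try each task once; park the ones that can't run yet, then drain the parked list. */
    public static void schedule(List<Task> tasks, ExecutorService pool) throws InterruptedException {
        List<Task> deferred = new ArrayList<>();

        for (Task t : tasks) {
            if (!trySubmit(t, pool))
                deferred.add(t); // keep for later instead of blocking a worker on a lock
        }

        while (!deferred.isEmpty()) {
            deferred.removeIf(t -> trySubmit(t, pool));
            TimeUnit.MILLISECONDS.sleep(5); // avoid busy-looping when nothing is runnable
        }
    }

    static boolean trySubmit(Task t, ExecutorService pool) {
        if (!t.canRun())
            return false;
        pool.submit(t::run);
        return true;
    }
}
```

The design choice mirrors the comment in the patch: rather than shuffling and letting threads block on domain locks, runnable work is submitted eagerly and only the currently-unrunnable remainder is retried.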
@@ -291,6 +309,51 @@ public class CrawlerMain extends ProcessMainClass {
         }
     }
+    /** Create a comparator that sorts the crawl specs in a way that is beneficial for the crawl,
+     * we want to enqueue domains that have common top domains first, but otherwise have a random
+     * order.
+     * <p></p>
+     * Note, we can't use hash codes for randomization as it is not desirable to have the same order
+     * every time the process is restarted (and CrawlSpecRecord is a record, which defines equals and
+     * hashcode based on the fields).
+     * */
+    private Comparator<CrawlSpecRecord> crawlSpecArrangement(List<CrawlSpecRecord> records) {
+        Random r = new Random();
+        Map<String, Integer> topDomainCounts = new HashMap<>(4 + (int) Math.sqrt(records.size()));
+        Map<String, Integer> randomOrder = new HashMap<>(records.size());
+        for (var spec : records) {
+            topDomainCounts.merge(EdgeDomain.getTopDomain(spec.domain), 1, Integer::sum);
+            randomOrder.put(spec.domain, r.nextInt());
+        }
+        return Comparator.comparing((CrawlSpecRecord spec) -> topDomainCounts.getOrDefault(EdgeDomain.getTopDomain(spec.domain), 0) >= 8)
+                .reversed()
+                .thenComparing(spec -> randomOrder.get(spec.domain))
+                .thenComparing(Record::hashCode); // non-deterministic tie-breaker
+    }
+
+    /** Submit a task for execution if it can be run, returns true if it was submitted
+     * or if it can be discarded */
+    private boolean trySubmitDeferredTask(CrawlTask task) {
+        if (!task.canRun()) {
+            return false;
+        }
+        if (pendingCrawlTasks.putIfAbsent(task.domain, task) != null) {
+            return true; // task has already run, duplicate in crawl specs
+        }
+        try {
+            // This blocks the caller when the pool is full
+            pool.submitQuietly(task);
+            return true;
+        }
+        catch (RuntimeException ex) {
+            logger.error("Failed to submit task " + task.domain, ex);
+            return false;
+        }
+    }
     public void runForSingleDomain(String targetDomainName, FileStorageId fileStorageId) throws Exception {
         runForSingleDomain(targetDomainName, fileStorageService.getStorage(fileStorageId).asPath());
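The effect of the crawlSpecArrangement comparator can be demonstrated with plain domain strings. Note that `topDomain` below is a crude, hypothetical stand-in for the project's `EdgeDomain.getTopDomain`, and the class name is invented for the example:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class CrawlOrderSketch {
    /** Domains sharing a popular top domain (>= 8 occurrences) sort first;
     *  everything else follows in a random but run-stable order. */
    public static List<String> arrange(List<String> domains, Random r) {
        Map<String, Integer> topDomainCounts = new HashMap<>();
        Map<String, Integer> randomOrder = new HashMap<>();

        for (String d : domains) {
            topDomainCounts.merge(topDomain(d), 1, Integer::sum);
            randomOrder.put(d, r.nextInt());
        }

        List<String> sorted = new ArrayList<>(domains);
        sorted.sort(Comparator
                .comparing((String d) -> topDomainCounts.getOrDefault(topDomain(d), 0) >= 8)
                .reversed() // true sorts first: big clusters go to the front
                .thenComparing(randomOrder::get));
        return sorted;
    }

    // crude top-domain extraction for the sketch; the real code uses EdgeDomain
    static String topDomain(String domain) {
        String[] parts = domain.split("\\.");
        int n = parts.length;
        return n <= 2 ? domain : parts[n - 2] + "." + parts[n - 1];
    }
}
```

Fronting the large same-top-domain clusters means their (lock-contended) tasks enter the deferral machinery early, while the long tail of singleton domains keeps the pool busy.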
@@ -348,12 +411,23 @@ public class CrawlerMain extends ProcessMainClass {
this.id = Integer.toHexString(domain.hashCode()); this.id = Integer.toHexString(domain.hashCode());
} }
/** Best effort indicator whether we could start this now without getting stuck in
* DomainLocks purgatory */
public boolean canRun() {
return domainLocks.canLock(new EdgeDomain(domain));
}
        @Override
        public void run() throws Exception {
-           if (workLog.isJobFinished(domain)) {
-               logger.info("Omitting task {}, as it is already run", domain);
-               return;
-           }
            // No-Op

            Path newWarcFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.LIVE);
            Path tempFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.TEMP);
-           Path parquetFile = CrawlerOutputFile.createParquetPath(outputDir, id, domain);
            Path slopFile = CrawlerOutputFile.createSlopPath(outputDir, id, domain);

            // Move the WARC file to a temp file if it exists, so we can resume the crawl using the old data
            // while writing to the same file name as before
@@ -364,10 +438,10 @@ public class CrawlerMain extends ProcessMainClass {
                Files.deleteIfExists(tempFile);
            }

-           try (var warcRecorder = new WarcRecorder(newWarcFile); // write to a temp file for now
            try (var warcRecorder = new WarcRecorder(newWarcFile, fetcher); // write to a temp file for now
                 var retriever = new CrawlerRetreiver(fetcher, domainProber, specification, domainStateDb, warcRecorder);
-                CrawlDataReference reference = getReference();
                 CrawlDataReference reference = getReference()
                )
            {
                // Resume the crawl if it was aborted
                if (Files.exists(tempFile)) {
@@ -387,15 +461,15 @@ public class CrawlerMain extends ProcessMainClass {
                reference.delete();

                // Convert the WARC file to Parquet
-               CrawledDocumentParquetRecordFileWriter
-                       .convertWarc(domain, userAgent, newWarcFile, parquetFile);
                SlopCrawlDataRecord
                        .convertWarc(domain, userAgent, newWarcFile, slopFile);

                // Optionally archive the WARC file if full retention is enabled,
                // otherwise delete it:
                warcArchiver.consumeWarc(newWarcFile, domain);

                // Mark the domain as finished in the work log
-               workLog.setJobToFinished(domain, parquetFile.toString(), size);
                workLog.setJobToFinished(domain, slopFile.toString(), size);

                // Update the progress bar
                heartbeat.setProgress(tasksDone.incrementAndGet() / (double) totalTasks);
@@ -405,7 +479,7 @@ public class CrawlerMain extends ProcessMainClass {
                logger.error("Error fetching domain " + domain, e);
            }
            finally {
-               // We don't need to double-count these; it's also kept int he workLog
                // We don't need to double-count these; it's also kept in the workLog
                pendingCrawlTasks.remove(domain);
                Thread.currentThread().setName("[idle]");
@@ -416,11 +490,22 @@ public class CrawlerMain extends ProcessMainClass {
        private CrawlDataReference getReference() {
            try {
-               return new CrawlDataReference(CrawledDomainReader.createDataStream(outputDir, domain, id));
-           } catch (IOException e) {
                Path slopPath = CrawlerOutputFile.getSlopPath(outputDir, id, domain);
                if (Files.exists(slopPath)) {
                    return new CrawlDataReference(slopPath);
                }

                Path parquetPath = CrawlerOutputFile.getParquetPath(outputDir, id, domain);
                if (Files.exists(parquetPath)) {
                    slopPath = migrateParquetData(parquetPath, domain, outputDir);
                    return new CrawlDataReference(slopPath);
                }
            } catch (Exception e) {
                logger.debug("Failed to read previous crawl data for {}", specification.domain());
-               return new CrawlDataReference();
            }

            return new CrawlDataReference();
        }
    }
@@ -480,4 +565,20 @@ public class CrawlerMain extends ProcessMainClass {
            }
        }
    }

    // Migrate from parquet to slop if necessary
    //
    // This must be synchronized as chewing through parquet files in parallel leads to enormous memory overhead
    private synchronized Path migrateParquetData(Path inputPath, String domain, Path crawlDataRoot) throws IOException {
        if (!inputPath.toString().endsWith(".parquet")) {
            return inputPath;
        }

        Path outputFile = CrawlerOutputFile.createSlopPath(crawlDataRoot, Integer.toHexString(domain.hashCode()), domain);

        SlopCrawlDataRecord.convertFromParquet(inputPath, outputFile);

        return outputFile;
    }
}
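The migration helper above only converts inputs that actually carry a `.parquet` extension; anything else is passed through untouched, and the slop output is named after the hex of the domain's hash code. A standalone sketch of that guard logic, with a stubbed-out converter (`SlopCrawlDataRecord` is not available here) and an illustrative `.slop.zip` naming convention that is an assumption, not the real `CrawlerOutputFile` layout:

```java
import java.nio.file.Path;

public class MigrationSketch {
    // Stand-in for SlopCrawlDataRecord.convertFromParquet; the real converter
    // rewrites the parquet crawl records into a slop table.
    static void convertFromParquet(Path in, Path out) {
        System.out.println("converting " + in + " -> " + out);
    }

    // Mirrors migrateParquetData: non-parquet inputs are returned as-is,
    // parquet inputs are converted to a slop file named after the domain hash.
    static Path migrate(Path inputPath, String domain, Path crawlDataRoot) {
        if (!inputPath.toString().endsWith(".parquet")) {
            return inputPath;
        }
        // Hypothetical output naming; the real path comes from CrawlerOutputFile.createSlopPath
        Path outputFile = crawlDataRoot.resolve(
                Integer.toHexString(domain.hashCode()) + "-" + domain + ".slop.zip");
        convertFromParquet(inputPath, outputFile);
        return outputFile;
    }

    public static void main(String[] args) {
        Path root = Path.of("crawl-data");
        // Already-migrated data passes through unchanged
        System.out.println(migrate(root.resolve("example.com.slop.zip"), "example.com", root));
        // Parquet data gets converted
        System.out.println(migrate(root.resolve("example.com.parquet"), "example.com", root));
    }
}
```

In the real class the method is additionally `synchronized`, serializing migrations because parallel parquet decoding has a large memory overhead.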

View File

@@ -1,5 +1,8 @@
package nu.marginalia.crawl;

import com.google.inject.Inject;
import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorageType;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -8,7 +11,9 @@ import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.time.Duration;
import java.time.Instant;
import java.util.Objects;
import java.util.Optional;

/** Supplemental sqlite database for storing the summary of a crawl.
@@ -20,6 +25,17 @@ public class DomainStateDb implements AutoCloseable {

    private final Connection connection;

    public record CrawlMeta(
            String domainName,
            Instant lastFullCrawl,
            Duration recrawlTime,
            Duration crawlTime,
            int recrawlErrors,
            int crawlChanges,
            int totalCrawlSize
    ) {}

    public record SummaryRecord(
            String domainName,
            Instant lastUpdated,
@@ -60,7 +76,31 @@ public class DomainStateDb implements AutoCloseable {
    }

-   public DomainStateDb(Path filename) throws SQLException {
    public record FaviconRecord(String contentType, byte[] imageData) {}

    @Inject
    public DomainStateDb(FileStorageService fileStorageService) throws SQLException {
        this(findFilename(fileStorageService));
    }

    private static Path findFilename(FileStorageService fileStorageService) throws SQLException {
        var fsId = fileStorageService.getOnlyActiveFileStorage(FileStorageType.CRAWL_DATA);

        if (fsId.isPresent()) {
            var fs = fileStorageService.getStorage(fsId.get());
            return fs.asPath().resolve("domainstate.db");
        }
        else {
            return null;
        }
    }

    public DomainStateDb(@Nullable Path filename) throws SQLException {
        if (null == filename) {
            connection = null;
            return;
        }

        String sqliteDbString = "jdbc:sqlite:" + filename.toString();
        connection = DriverManager.getConnection(sqliteDbString);
@@ -74,18 +114,102 @@ public class DomainStateDb implements AutoCloseable {
                        feedUrl TEXT
                        )
                    """);
            stmt.executeUpdate("""
                    CREATE TABLE IF NOT EXISTS crawl_meta (
                        domain TEXT PRIMARY KEY,
                        lastFullCrawlEpochMs LONG NOT NULL,
                        recrawlTimeMs LONG NOT NULL,
                        recrawlErrors INTEGER NOT NULL,
                        crawlTimeMs LONG NOT NULL,
                        crawlChanges INTEGER NOT NULL,
                        totalCrawlSize INTEGER NOT NULL
                        )
                    """);
            stmt.executeUpdate("""
                    CREATE TABLE IF NOT EXISTS favicon (
                        domain TEXT PRIMARY KEY,
                        contentType TEXT NOT NULL,
                        icon BLOB NOT NULL
                        )
                    """);
            stmt.execute("PRAGMA journal_mode=WAL");
        }
    }

    @Override
    public void close() throws SQLException {
-       connection.close();
        if (connection != null) {
            connection.close();
        }
    }

    public boolean isAvailable() {
        return connection != null;
    }

    public void saveIcon(String domain, FaviconRecord faviconRecord) {
        if (connection == null) throw new IllegalStateException("No connection to domainstate db");

        try (var stmt = connection.prepareStatement("""
                INSERT OR REPLACE INTO favicon (domain, contentType, icon)
                VALUES(?, ?, ?)
                """)) {
            stmt.setString(1, domain);
            stmt.setString(2, Objects.requireNonNullElse(faviconRecord.contentType, "application/octet-stream"));
            stmt.setBytes(3, faviconRecord.imageData);
            stmt.executeUpdate();
        }
        catch (SQLException ex) {
            logger.error("Failed to insert favicon", ex);
        }
    }

    public Optional<FaviconRecord> getIcon(String domain) {
        if (connection == null)
            return Optional.empty();

        try (var stmt = connection.prepareStatement("SELECT contentType, icon FROM favicon WHERE DOMAIN = ?")) {
            stmt.setString(1, domain);
            var rs = stmt.executeQuery();
            if (rs.next()) {
                return Optional.of(
                        new FaviconRecord(
                                rs.getString("contentType"),
                                rs.getBytes("icon")
                        )
                );
            }
        } catch (SQLException e) {
            logger.error("Failed to retrieve favicon", e);
        }

        return Optional.empty();
    }

    public void save(CrawlMeta crawlMeta) {
        if (connection == null) throw new IllegalStateException("No connection to domainstate db");

        try (var stmt = connection.prepareStatement("""
                INSERT OR REPLACE INTO crawl_meta (domain, lastFullCrawlEpochMs, recrawlTimeMs, recrawlErrors, crawlTimeMs, crawlChanges, totalCrawlSize)
                VALUES (?, ?, ?, ?, ?, ?, ?)
                """)) {
            stmt.setString(1, crawlMeta.domainName());
            stmt.setLong(2, crawlMeta.lastFullCrawl.toEpochMilli());
            stmt.setLong(3, crawlMeta.recrawlTime.toMillis());
            stmt.setInt(4, crawlMeta.recrawlErrors);
            stmt.setLong(5, crawlMeta.crawlTime.toMillis());
            stmt.setInt(6, crawlMeta.crawlChanges);
            stmt.setInt(7, crawlMeta.totalCrawlSize);
            stmt.executeUpdate();
        } catch (SQLException e) {
            logger.error("Failed to insert crawl meta record", e);
        }
    }

    public void save(SummaryRecord record) {
        if (connection == null) throw new IllegalStateException("No connection to domainstate db");

        try (var stmt = connection.prepareStatement("""
                INSERT OR REPLACE INTO summary (domain, lastUpdatedEpochMs, state, stateDesc, feedUrl)
                VALUES (?, ?, ?, ?, ?)
@@ -101,7 +225,38 @@ public class DomainStateDb implements AutoCloseable {
        }
    }

-   public Optional<SummaryRecord> get(String domainName) {
    public Optional<CrawlMeta> getMeta(String domainName) {
        if (connection == null)
            return Optional.empty();

        try (var stmt = connection.prepareStatement("""
                SELECT domain, lastFullCrawlEpochMs, recrawlTimeMs, recrawlErrors, crawlTimeMs, crawlChanges, totalCrawlSize
                FROM crawl_meta
                WHERE domain = ?
                """)) {
            stmt.setString(1, domainName);
            var rs = stmt.executeQuery();
            if (rs.next()) {
                return Optional.of(new CrawlMeta(
                        rs.getString("domain"),
                        Instant.ofEpochMilli(rs.getLong("lastFullCrawlEpochMs")),
                        Duration.ofMillis(rs.getLong("recrawlTimeMs")),
                        Duration.ofMillis(rs.getLong("crawlTimeMs")),
                        rs.getInt("recrawlErrors"),
                        rs.getInt("crawlChanges"),
                        rs.getInt("totalCrawlSize")
                ));
            }
        } catch (SQLException ex) {
            logger.error("Failed to get crawl meta record", ex);
        }

        return Optional.empty();
    }

    public Optional<SummaryRecord> getSummary(String domainName) {
        if (connection == null)
            return Optional.empty();

        try (var stmt = connection.prepareStatement("""
                SELECT domain, lastUpdatedEpochMs, state, stateDesc, feedUrl
                FROM summary
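A recurring pattern in this class is that a missing `domainstate.db` degrades reads to `Optional.empty()` while writes fail loudly with `IllegalStateException`. A minimal sketch of that availability contract, using a hypothetical in-memory `StoreSketch` stand-in instead of sqlite:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of DomainStateDb's availability contract: reads are soft-fail,
// writes throw when no backing database was configured.
class StoreSketch {
    private final Map<String, String> backing; // null models "no domainstate.db found"

    StoreSketch(Map<String, String> backing) {
        this.backing = backing;
    }

    boolean isAvailable() {
        return backing != null;
    }

    Optional<String> get(String key) {
        if (backing == null) return Optional.empty(); // soft-fail read
        return Optional.ofNullable(backing.get(key));
    }

    void save(String key, String value) {
        if (backing == null) throw new IllegalStateException("No connection to domainstate db");
        backing.put(key, value);
    }
}

public class Main {
    public static void main(String[] args) {
        var absent = new StoreSketch(null);
        System.out.println(absent.isAvailable());       // false
        System.out.println(absent.get("example.com"));  // Optional.empty

        var present = new StoreSketch(new HashMap<>());
        present.save("example.com", "crawled");
        System.out.println(present.get("example.com")); // Optional[crawled]
    }
}
```

This mirrors why the crawler can keep running when `findFilename` returns null: only the code paths that must persist state (`save`, `saveIcon`) treat the absent database as an error.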

View File

@@ -1,6 +1,6 @@
package nu.marginalia.crawl.fetcher;

-import okhttp3.Request;
import org.apache.hc.client5.http.classic.methods.HttpGet;

/** Encapsulates request modifiers; the ETag and Last-Modified tags for a resource */
public record ContentTags(String etag, String lastMod) {
@@ -17,14 +17,14 @@ public record ContentTags(String etag, String lastMod) {
    }

    /** Paints the tags onto the request builder. */
-   public void paint(Request.Builder getBuilder) {
    public void paint(HttpGet request) {

        if (etag != null) {
-           getBuilder.addHeader("If-None-Match", etag);
            request.addHeader("If-None-Match", etag);
        }

        if (lastMod != null) {
-           getBuilder.addHeader("If-Modified-Since", lastMod);
            request.addHeader("If-Modified-Since", lastMod);
        }
    }
}

View File

@@ -1,33 +1,14 @@
package nu.marginalia.crawl.fetcher;

-import okhttp3.Cookie;
-import okhttp3.CookieJar;
-import okhttp3.HttpUrl;
-
-import java.util.Collections;
import java.io.IOException;
import java.net.CookieHandler;
import java.net.URI;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

-public class Cookies {
public class Cookies extends CookieHandler {
-   final ThreadLocal<ConcurrentHashMap<String, List<Cookie>>> cookieJar = ThreadLocal.withInitial(ConcurrentHashMap::new);
    final ThreadLocal<ConcurrentHashMap<String, List<String>>> cookieJar = ThreadLocal.withInitial(ConcurrentHashMap::new);

-   public CookieJar getJar() {
-       return new CookieJar() {
-           @Override
-           public void saveFromResponse(HttpUrl url, List<Cookie> cookies) {
-               if (!cookies.isEmpty()) {
-                   cookieJar.get().put(url.host(), cookies);
-               }
-           }
-
-           @Override
-           public List<Cookie> loadForRequest(HttpUrl url) {
-               return cookieJar.get().getOrDefault(url.host(), Collections.emptyList());
-           }
-       };
-   }

    public void clear() {
        cookieJar.get().clear();
@@ -38,6 +19,16 @@ public class Cookies {
    }

    public List<String> getCookies() {
-       return cookieJar.get().values().stream().flatMap(List::stream).map(Cookie::toString).toList();
        return cookieJar.get().values().stream().flatMap(List::stream).toList();
    }

    @Override
    public Map<String, List<String>> get(URI uri, Map<String, List<String>> requestHeaders) throws IOException {
        return cookieJar.get();
    }

    @Override
    public void put(URI uri, Map<String, List<String>> responseHeaders) throws IOException {
        cookieJar.get().putAll(responseHeaders);
    }
}
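The rewritten `Cookies` class above swaps OkHttp's `CookieJar` callback for the JDK's `java.net.CookieHandler` abstraction, while keeping the jar thread-local so concurrent crawl tasks never observe each other's cookies. A self-contained sketch of the same idea, using only the standard library:

```java
import java.io.IOException;
import java.net.CookieHandler;
import java.net.URI;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a CookieHandler whose jar is thread-local: each crawler thread
// accumulates its own header map, isolated from the others.
class ThreadLocalCookies extends CookieHandler {
    final ThreadLocal<ConcurrentHashMap<String, List<String>>> jar =
            ThreadLocal.withInitial(ConcurrentHashMap::new);

    @Override
    public Map<String, List<String>> get(URI uri, Map<String, List<String>> requestHeaders) throws IOException {
        return jar.get();
    }

    @Override
    public void put(URI uri, Map<String, List<String>> responseHeaders) throws IOException {
        // Note: like the diff's implementation, this stores raw response
        // headers keyed by header name rather than parsed cookies.
        jar.get().putAll(responseHeaders);
    }

    public void clear() {
        jar.get().clear();
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        var cookies = new ThreadLocalCookies();
        URI uri = URI.create("https://example.com/");

        cookies.put(uri, Map.of("Set-Cookie", List.of("session=abc")));
        System.out.println(cookies.get(uri, Map.of()));  // {Set-Cookie=[session=abc]}

        // A different thread starts with its own, empty jar
        Thread t = new Thread(() -> {
            try {
                System.out.println(cookies.get(uri, Map.of()).isEmpty()); // true
            } catch (IOException e) { throw new RuntimeException(e); }
        });
        t.start();
        t.join();
    }
}
```

One design consequence worth noting: because the map is keyed by header name and not by host, this sketch (like the class in the diff) does not scope cookies per-domain; `clear()` between domains is what keeps state from leaking.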

View File

@@ -3,31 +3,31 @@ package nu.marginalia.crawl.fetcher;
import com.google.inject.ImplementedBy;
import crawlercommons.robots.SimpleRobotRules;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.model.crawldata.CrawlerDomainStatus;
import org.apache.hc.client5.http.cookie.CookieStore;

import java.util.List;

@ImplementedBy(HttpFetcherImpl.class)
-public interface HttpFetcher {
public interface HttpFetcher extends AutoCloseable {
    void setAllowAllContentTypes(boolean allowAllContentTypes);

-   List<String> getCookies();
    CookieStore getCookies();
    void clearCookies();

    DomainProbeResult probeDomain(EdgeUrl url);

-   ContentTypeProbeResult probeContentType(
-           EdgeUrl url,
-           WarcRecorder recorder,
-           ContentTags tags) throws HttpFetcherImpl.RateLimitException;

    HttpFetchResult fetchContent(EdgeUrl url,
                                 WarcRecorder recorder,
                                 CrawlDelayTimer timer,
                                 ContentTags tags,
-                                ProbeType probeType) throws HttpFetcherImpl.RateLimitException, Exception;
                                 ProbeType probeType);

    List<EdgeUrl> fetchSitemapUrls(String rootSitemapUrl, CrawlDelayTimer delayTimer);

    SimpleRobotRules fetchRobotRules(EdgeDomain domain, WarcRecorder recorder);
@@ -43,6 +43,7 @@ public interface HttpFetcher {
        /** This domain redirects to another domain */
        record Redirect(EdgeDomain domain) implements DomainProbeResult {}

        record RedirectSameDomain_Internal(EdgeUrl domain) implements DomainProbeResult {}

        /** If the retrieval of the probed url was successful, return the url as it was fetched
         * (which may be different from the url we probed, if we attempted another URL schema).
@@ -53,7 +54,10 @@ public interface HttpFetcher {
    }

    sealed interface ContentTypeProbeResult {
        record NoOp() implements ContentTypeProbeResult {}
        record Ok(EdgeUrl resolvedUrl) implements ContentTypeProbeResult { }
        record HttpError(int statusCode, String message) implements ContentTypeProbeResult { }
        record Redirect(EdgeUrl location) implements ContentTypeProbeResult { }
        record BadContentType(String contentType, int statusCode) implements ContentTypeProbeResult { }
        record Timeout(java.lang.Exception ex) implements ContentTypeProbeResult { }
        record Exception(java.lang.Exception ex) implements ContentTypeProbeResult { }
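Because `ContentTypeProbeResult` is a sealed interface, callers can dispatch over its variants with an exhaustive `switch` and no `default` arm. A toy sketch of that dispatch, using simplified stand-in records rather than the real Marginalia types:

```java
// Simplified stand-ins for the sealed result hierarchy above
sealed interface ProbeResult permits Ok, Redirect, BadContentType, NoOp {}
record Ok(String url) implements ProbeResult {}
record Redirect(String location) implements ProbeResult {}
record BadContentType(String contentType, int status) implements ProbeResult {}
record NoOp() implements ProbeResult {}

public class Main {
    // Exhaustive switch: the compiler rejects this if a permitted
    // variant is left unhandled, so adding a new result type forces
    // every call site to be revisited.
    static String describe(ProbeResult r) {
        return switch (r) {
            case Ok ok -> "fetch " + ok.url();
            case Redirect rd -> "follow " + rd.location();
            case BadContentType bad -> "skip (" + bad.contentType() + ", " + bad.status() + ")";
            case NoOp ignored -> "no probe needed";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Ok("https://example.com/doc.html")));
        System.out.println(describe(new BadContentType("application/zip", 200)));
        System.out.println(describe(new NoOp()));
    }
}
```

The new `NoOp` variant lets the Apache-client implementation return "nothing to probe" as an ordinary value instead of short-circuiting with `Ok(url)`, which the old signature had to do.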

View File

@@ -1,78 +1,152 @@
package nu.marginalia.crawl.fetcher;

import com.google.inject.Inject;
import com.google.inject.Singleton;
import crawlercommons.robots.SimpleRobotRules;
import crawlercommons.robots.SimpleRobotRulesParser;
import nu.marginalia.UserAgent;
-import nu.marginalia.crawl.fetcher.socket.FastTerminatingSocketFactory;
-import nu.marginalia.crawl.fetcher.socket.IpInterceptingNetworkInterceptor;
-import nu.marginalia.crawl.fetcher.socket.NoSecuritySSL;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
import nu.marginalia.link_parser.LinkParser;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.ContentTypeLogic;
import nu.marginalia.model.body.DocumentBodyExtractor;
import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.model.crawldata.CrawlerDomainStatus;
-import okhttp3.ConnectionPool;
-import okhttp3.Dispatcher;
-import okhttp3.OkHttpClient;
-import okhttp3.Request;
import org.apache.hc.client5.http.ConnectionKeepAliveStrategy;
import org.apache.hc.client5.http.HttpRequestRetryStrategy;
import org.apache.hc.client5.http.classic.HttpClient;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.config.ConnectionConfig;
import org.apache.hc.client5.http.config.RequestConfig;
import org.apache.hc.client5.http.cookie.BasicCookieStore;
import org.apache.hc.client5.http.cookie.CookieStore;
import org.apache.hc.client5.http.cookie.StandardCookieSpec;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManagerBuilder;
import org.apache.hc.client5.http.ssl.DefaultClientTlsStrategy;
import org.apache.hc.core5.http.*;
import org.apache.hc.core5.http.io.HttpClientResponseHandler;
import org.apache.hc.core5.http.io.SocketConfig;
import org.apache.hc.core5.http.io.entity.EntityUtils;
import org.apache.hc.core5.http.io.support.ClassicRequestBuilder;
import org.apache.hc.core5.http.message.MessageSupport;
import org.apache.hc.core5.http.protocol.HttpContext;
import org.apache.hc.core5.util.TimeValue;
import org.apache.hc.core5.util.Timeout;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.parser.Parser;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;

-import javax.net.ssl.X509TrustManager;
import javax.net.ssl.SSLContext;
-import java.io.InterruptedIOException;
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.net.URISyntaxException;
import java.security.NoSuchAlgorithmException;
import java.time.Duration;
-import java.util.List;
-import java.util.Objects;
-import java.util.Optional;
import java.util.*;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
-public class HttpFetcherImpl implements HttpFetcher {
@Singleton
public class HttpFetcherImpl implements HttpFetcher, HttpRequestRetryStrategy {

    private final Logger logger = LoggerFactory.getLogger(getClass());

    private final String userAgentString;
    private final String userAgentIdentifier;
-   private final Cookies cookies = new Cookies();

    private final CookieStore cookies = new BasicCookieStore();

    private static final SimpleRobotRulesParser robotsParser = new SimpleRobotRulesParser();
    private static final ContentTypeLogic contentTypeLogic = new ContentTypeLogic();

    private final Marker crawlerAuditMarker = MarkerFactory.getMarker("CRAWLER");

    private final LinkParser linkParser = new LinkParser();

    @Override
    public void setAllowAllContentTypes(boolean allowAllContentTypes) {
        contentTypeLogic.setAllowAllContentTypes(allowAllContentTypes);
    }

-   private final OkHttpClient client;
    private final CloseableHttpClient client;

-   private static final FastTerminatingSocketFactory ftSocketFactory = new FastTerminatingSocketFactory();
-
-   private OkHttpClient createClient(Dispatcher dispatcher, ConnectionPool pool) {
-       var builder = new OkHttpClient.Builder();
-       if (dispatcher != null) {
-           builder.dispatcher(dispatcher);
-       }
-
-       return builder.sslSocketFactory(NoSecuritySSL.buildSocketFactory(), (X509TrustManager) NoSecuritySSL.trustAllCerts[0])
-               .socketFactory(ftSocketFactory)
-               .hostnameVerifier(NoSecuritySSL.buildHostnameVerifyer())
-               .addNetworkInterceptor(new IpInterceptingNetworkInterceptor())
-               .connectionPool(pool)
-               .cookieJar(cookies.getJar())
-               .followRedirects(true)
-               .followSslRedirects(true)
-               .connectTimeout(8, TimeUnit.SECONDS)
-               .readTimeout(10, TimeUnit.SECONDS)
-               .writeTimeout(10, TimeUnit.SECONDS)
-               .build();
-   }
    private CloseableHttpClient createClient() throws NoSuchAlgorithmException {
        final ConnectionConfig connectionConfig = ConnectionConfig.custom()
                .setSocketTimeout(10, TimeUnit.SECONDS)
                .setConnectTimeout(30, TimeUnit.SECONDS)
                .build();

        final PoolingHttpClientConnectionManager connectionManager = PoolingHttpClientConnectionManagerBuilder.create()
                .setMaxConnPerRoute(2)
                .setMaxConnTotal(5000)
                .setDefaultConnectionConfig(connectionConfig)
                .setTlsSocketStrategy(new DefaultClientTlsStrategy(SSLContext.getDefault()))
                .build();

        connectionManager.setDefaultSocketConfig(SocketConfig.custom()
                .setSoLinger(TimeValue.ofSeconds(15))
                .setSoTimeout(Timeout.ofSeconds(10))
                .build()
        );

        final RequestConfig defaultRequestConfig = RequestConfig.custom()
                .setCookieSpec(StandardCookieSpec.RELAXED)
                .setResponseTimeout(10, TimeUnit.SECONDS)
                .setConnectionRequestTimeout(5, TimeUnit.MINUTES)
                .build();

        return HttpClients.custom()
                .setDefaultCookieStore(cookies)
                .setConnectionManager(connectionManager)
                .setRetryStrategy(this)
                .setKeepAliveStrategy(new ConnectionKeepAliveStrategy() {
                    // Default keep-alive duration is 3 minutes, but this is too long for us,
                    // as we are either going to re-use it fairly quickly or close it for a long time.
                    //
                    // So we set it to 30 seconds or clamp the server-provided value to a minimum of 10 seconds.
                    private static final TimeValue defaultValue = TimeValue.ofSeconds(30);

                    @Override
                    public TimeValue getKeepAliveDuration(HttpResponse response, HttpContext context) {
                        final Iterator<HeaderElement> it = MessageSupport.iterate(response, HeaderElements.KEEP_ALIVE);
                        while (it.hasNext()) {
                            final HeaderElement he = it.next();
                            final String param = he.getName();
                            final String value = he.getValue();

                            if (value == null)
                                continue;
                            if (!"timeout".equalsIgnoreCase(param))
                                continue;

                            try {
                                long timeout = Long.parseLong(value);
                                timeout = Math.clamp(timeout, 30, defaultValue.toSeconds());
                                return TimeValue.ofSeconds(timeout);
                            } catch (final NumberFormatException ignore) {
                                break;
                            }
                        }
                        return defaultValue;
                    }
                })
                .disableRedirectHandling()
                .setDefaultRequestConfig(defaultRequestConfig)
                .build();
    }

    @Override
-   public List<String> getCookies() {
-       return cookies.getCookies();
    public CookieStore getCookies() {
        return cookies;
    }

    @Override
@@ -81,26 +155,32 @@ public class HttpFetcherImpl implements HttpFetcher {
    }

    @Inject
-   public HttpFetcherImpl(UserAgent userAgent,
-                          Dispatcher dispatcher,
-                          ConnectionPool connectionPool)
    public HttpFetcherImpl(UserAgent userAgent)
    {
-       this.client = createClient(dispatcher, connectionPool);
        try {
            this.client = createClient();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
        this.userAgentString = userAgent.uaString();
        this.userAgentIdentifier = userAgent.uaIdentifier();
    }

    public HttpFetcherImpl(String userAgent) {
-       this.client = createClient(null, new ConnectionPool());
        try {
            this.client = createClient();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
        this.userAgentString = userAgent;
        this.userAgentIdentifier = userAgent;
    }

    // Not necessary in prod, but useful in test
-   public void close() {
-       client.dispatcher().executorService().shutdown();
-       client.connectionPool().evictAll();
    public void close() throws IOException {
        client.close();
    }
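The keep-alive strategy above honors a server-supplied `Keep-Alive: timeout=N` hint but clamps it so crawler connections are neither dropped immediately nor held idle for minutes. A standalone sketch of that header parsing and clamping, without the Apache client types; the 10–30 second bounds here follow the comment in the diff and are an assumption (the committed `Math.clamp(timeout, 30, defaultValue.toSeconds())` uses 30 for both bounds):

```java
// Sketch of the keep-alive clamp: parse "timeout=N" out of a Keep-Alive
// header value and clamp it to [10, 30] seconds, falling back to 30.
public class KeepAliveSketch {
    static final long DEFAULT_SECONDS = 30;

    static long keepAliveSeconds(String keepAliveHeader) {
        if (keepAliveHeader != null) {
            // Keep-Alive headers look like "timeout=5, max=100"
            for (String part : keepAliveHeader.split(",")) {
                String[] kv = part.trim().split("=", 2);
                if (kv.length == 2 && "timeout".equalsIgnoreCase(kv[0].trim())) {
                    try {
                        long timeout = Long.parseLong(kv[1].trim());
                        return Math.clamp(timeout, 10, DEFAULT_SECONDS);
                    } catch (NumberFormatException ignore) {
                        break; // malformed value: fall through to the default
                    }
                }
            }
        }
        return DEFAULT_SECONDS;
    }

    public static void main(String[] args) {
        System.out.println(keepAliveSeconds("timeout=5, max=100")); // 10 (clamped up)
        System.out.println(keepAliveSeconds("timeout=15"));         // 15
        System.out.println(keepAliveSeconds("timeout=600"));        // 30 (clamped down)
        System.out.println(keepAliveSeconds(null));                 // 30
    }
}
```

The rationale, per the diff's comment: a crawler either reuses a connection within seconds (the per-domain crawl delay) or not for a long time, so the library's 3-minute default only ties up sockets.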
    /**
     * Probe the domain to see if it is reachable, attempting to identify which schema to use,
     * and if there are any redirects.  This is done by one or more HEAD requests.
@@ -110,23 +190,94 @@ public class HttpFetcherImpl implements HttpFetcher {
     */
    @Override
    public DomainProbeResult probeDomain(EdgeUrl url) {
-       var head = new Request.Builder().head().addHeader("User-agent", userAgentString)
-               .url(url.toString())
-               .build();
-       var call = client.newCall(head);
-
-       try (var rsp = call.execute()) {
-           EdgeUrl requestUrl = new EdgeUrl(rsp.request().url().toString());
-
-           if (!Objects.equals(requestUrl.domain, url.domain)) {
-               return new DomainProbeResult.Redirect(requestUrl.domain);
-           }
-           return new DomainProbeResult.Ok(requestUrl);
-       }
-       catch (Exception ex) {
-           return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, ex.getMessage());
-       }
        List<EdgeUrl> urls = new ArrayList<>();
        urls.add(url);

        int redirects = 0;
        AtomicBoolean tryGet = new AtomicBoolean(false);

        while (!urls.isEmpty() && ++redirects < 5) {
            ClassicHttpRequest request;
            EdgeUrl topUrl = urls.removeFirst();
            try {
                if (tryGet.get()) {
                    request = ClassicRequestBuilder.get(topUrl.asURI())
                            .addHeader("User-Agent", userAgentString)
                            .addHeader("Accept-Encoding", "gzip")
                            .addHeader("Range", "bytes=0-255")
                            .build();
                } else {
                    request = ClassicRequestBuilder.head(topUrl.asURI())
                            .addHeader("User-Agent", userAgentString)
                            .addHeader("Accept-Encoding", "gzip")
                            .build();
                }
            } catch (URISyntaxException e) {
                return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Invalid URL");
            }

            try {
                var result = SendLock.wrapSend(client, request, response -> {
                    EntityUtils.consume(response.getEntity());

                    return switch (response.getCode()) {
                        case 200 -> new DomainProbeResult.Ok(url);
                        case 405 -> {
                            if (!tryGet.get()) {
                                tryGet.set(true);
                                yield new DomainProbeResult.RedirectSameDomain_Internal(url);
                            }
                            else {
                                yield new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "HTTP status 405, tried HEAD and GET?!");
                            }
                        }
                        case 301, 302, 307 -> {
                            var location = response.getFirstHeader("Location");

                            if (location != null) {
                                Optional<EdgeUrl> newUrl = linkParser.parseLink(topUrl, location.getValue());
                                if (newUrl.isEmpty()) {
                                    yield new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Invalid location header on redirect");
                                }
                                EdgeUrl newEdgeUrl = newUrl.get();
                                if (newEdgeUrl.domain.equals(topUrl.domain)) {
                                    yield new DomainProbeResult.RedirectSameDomain_Internal(newEdgeUrl);
                                }
                                else {
                                    yield new DomainProbeResult.Redirect(newEdgeUrl.domain);
                                }
                            }

                            yield new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "No location header on redirect");
                        }
                        default ->
                                new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "HTTP status " + response.getCode());
                    };
                });

                if (result instanceof DomainProbeResult.RedirectSameDomain_Internal(EdgeUrl redirUrl)) {
                    urls.add(redirUrl);
                }
                else {
                    return result;
                }

                // We don't have robots.txt yet, so we'll assume a request delay of 1 second
                TimeUnit.SECONDS.sleep(1);
            }
            catch (SocketTimeoutException ex) {
                return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Timeout during domain probe");
            }
            catch (Exception ex) {
                return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Error during domain probe");
            }
        }

        return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Failed to resolve domain root");
    }
/** Perform a HEAD request to fetch the content type of a URL. /** Perform a HEAD request to fetch the content type of a URL.
@@ -137,66 +288,72 @@ public class HttpFetcherImpl implements HttpFetcher {
* recorded in the WARC file on failure. * recorded in the WARC file on failure.
*/ */
public ContentTypeProbeResult probeContentType(EdgeUrl url, public ContentTypeProbeResult probeContentType(EdgeUrl url,
WarcRecorder warcRecorder, CrawlDelayTimer timer,
ContentTags tags) throws RateLimitException { ContentTags tags) {
if (tags.isEmpty() && contentTypeLogic.isUrlLikeBinary(url)) { if (!tags.isEmpty() || !contentTypeLogic.isUrlLikeBinary(url)) {
var headBuilder = new Request.Builder().head() return new ContentTypeProbeResult.NoOp();
-                .addHeader("User-agent", userAgentString)
-                .addHeader("Accept-Encoding", "gzip")
-                .url(url.toString());
-
-        var head = headBuilder.build();
-        var call = client.newCall(head);
-
-        try (var rsp = call.execute()) {
-            var contentTypeHeader = rsp.header("Content-type");
-
-            if (contentTypeHeader != null && !contentTypeLogic.isAllowableContentType(contentTypeHeader)) {
-                warcRecorder.flagAsFailedContentTypeProbe(url, contentTypeHeader, rsp.code());
-                return new ContentTypeProbeResult.BadContentType(contentTypeHeader, rsp.code());
-            }
-
-            // Update the URL to the final URL of the HEAD request, otherwise we might end up doing
-
-            // HEAD 301 url1 -> url2
-            // HEAD 200 url2
-            // GET 301 url1 -> url2
-            // GET 200 url2
-
-            // which is not what we want. Overall we want to do as few requests as possible to not raise
-            // too many eyebrows when looking at the logs on the target server. Overall it's probably desirable
-            // that it looks like the traffic makes sense, as opposed to looking like a broken bot.
-
-            var redirectUrl = new EdgeUrl(rsp.request().url().toString());
-            EdgeUrl ret;
-
-            if (Objects.equals(redirectUrl.domain, url.domain)) ret = redirectUrl;
-            else ret = url;
-
-            // Intercept rate limiting
-            if (rsp.code() == 429) {
-                throw new HttpFetcherImpl.RateLimitException(Objects.requireNonNullElse(rsp.header("Retry-After"), "1"));
-            }
-
-            return new ContentTypeProbeResult.Ok(ret);
-        }
-        catch (RateLimitException ex) {
-            throw ex;
-        }
-        catch (InterruptedIOException ex) {
-            warcRecorder.flagAsTimeout(url);
-            return new ContentTypeProbeResult.Timeout(ex);
-        } catch (Exception ex) {
-            logger.error("Error during fetching {}[{}]", ex.getClass().getSimpleName(), ex.getMessage());
-            warcRecorder.flagAsError(url, ex);
-            return new ContentTypeProbeResult.Exception(ex);
-        }
-    }
+        try {
+            ClassicHttpRequest head = ClassicRequestBuilder.head(url.asURI())
+                    .addHeader("User-Agent", userAgentString)
+                    .addHeader("Accept-Encoding", "gzip")
+                    .build();
+
+            var result = SendLock.wrapSend(client, head, (rsp) -> {
+                EntityUtils.consume(rsp.getEntity());
+
+                int statusCode = rsp.getCode();
+
+                // Handle redirects
+                if (statusCode == 301 || statusCode == 302 || statusCode == 307) {
+                    var location = rsp.getFirstHeader("Location");
+                    if (location != null) {
+                        Optional<EdgeUrl> newUrl = linkParser.parseLink(url, location.getValue());
+                        if (newUrl.isEmpty())
+                            return new ContentTypeProbeResult.HttpError(statusCode, "Invalid location header on redirect");
+                        return new ContentTypeProbeResult.Redirect(newUrl.get());
+                    }
+                }
+
+                if (statusCode == 405) {
+                    // If we get a 405, we can't probe the content type with HEAD, so we'll just say it's ok
+                    return new ContentTypeProbeResult.Ok(url);
+                }
+
+                // Handle errors
+                if (statusCode < 200 || statusCode > 300) {
+                    return new ContentTypeProbeResult.HttpError(statusCode, "Bad status code");
+                }
+
+                // Handle missing content type
+                var ctHeader = rsp.getFirstHeader("Content-Type");
+                if (ctHeader == null) {
+                    return new ContentTypeProbeResult.HttpError(statusCode, "Missing Content-Type header");
+                }
+                var contentType = ctHeader.getValue();
+
+                // Check if the content type is allowed
+                if (contentTypeLogic.isAllowableContentType(contentType)) {
+                    return new ContentTypeProbeResult.Ok(url);
+                } else {
+                    return new ContentTypeProbeResult.BadContentType(contentType, statusCode);
+                }
+            });
+
+            return result;
+        }
+        catch (SocketTimeoutException ex) {
+            return new ContentTypeProbeResult.Timeout(ex);
+        }
+        catch (Exception ex) {
+            logger.error("Error during fetching {}[{}]", ex.getClass().getSimpleName(), ex.getMessage());
+            return new ContentTypeProbeResult.Exception(ex);
+        }
+        finally {
+            timer.waitFetchDelay();
+        }
+
+        return new ContentTypeProbeResult.Ok(url);
+    }
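The HEAD-probe branching above can be restated as a pure decision function, which makes the precedence (redirect, then 405, then other errors, then Content-Type) easy to check in isolation. Names here are illustrative, not from the patch, and the content-type rule is simplified to a `text/` prefix check standing in for `contentTypeLogic.isAllowableContentType`:

```java
public class Main {
    sealed interface Probe {}
    record Ok() implements Probe {}
    record Redirect(String location) implements Probe {}
    record HttpError(int status, String message) implements Probe {}
    record BadContentType(String contentType) implements Probe {}

    // Hypothetical restatement of the probe decision tree above
    static Probe classify(int status, String location, String contentType) {
        if (status == 301 || status == 302 || status == 307) {
            if (location != null) return new Redirect(location);
        }
        if (status == 405) return new Ok(); // HEAD not allowed: assume fetchable
        if (status < 200 || status > 300) return new HttpError(status, "Bad status code");
        if (contentType == null) return new HttpError(status, "Missing Content-Type header");
        if (contentType.startsWith("text/")) return new Ok(); // simplified allow-rule
        return new BadContentType(contentType);
    }

    public static void main(String[] args) {
        if (!(classify(301, "https://example.com/new", null) instanceof Redirect)) throw new AssertionError();
        if (!(classify(405, null, null) instanceof Ok)) throw new AssertionError();
        if (!(classify(200, null, "text/html") instanceof Ok)) throw new AssertionError();
        if (!(classify(200, null, "application/zip") instanceof BadContentType)) throw new AssertionError();
        if (!(classify(404, null, null) instanceof HttpError)) throw new AssertionError();
        System.out.println("ok");
    }
}
```

A sketch only; the real method also records the outcome in the WARC file and honors the crawl delay timer.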
    /** Fetch the content of a URL, and record it in a WARC file,
@@ -206,35 +363,75 @@ public class HttpFetcherImpl implements HttpFetcher {
     @Override
     public HttpFetchResult fetchContent(EdgeUrl url,
                                         WarcRecorder warcRecorder,
+                                        CrawlDelayTimer timer,
                                         ContentTags contentTags,
                                         ProbeType probeType)
-            throws Exception
     {
-        var getBuilder = new Request.Builder().get();
-
-        getBuilder.url(url.toString())
-                .addHeader("Accept-Encoding", "gzip")
-                .addHeader("Accept-Language", "en,*;q=0.5")
-                .addHeader("Accept", "text/html, application/xhtml+xml, text/*;q=0.8")
-                .addHeader("User-agent", userAgentString);
-
-        contentTags.paint(getBuilder);
-
-        HttpFetchResult result = warcRecorder.fetch(client, getBuilder.build());
-
-        if (result instanceof HttpFetchResult.ResultOk ok) {
-            if (ok.statusCode() == 429) {
-                throw new RateLimitException(Objects.requireNonNullElse(ok.header("Retry-After"), "1"));
-            }
-            if (ok.statusCode() == 304) {
-                return new HttpFetchResult.Result304Raw();
-            }
-            if (ok.statusCode() == 200) {
-                return ok;
-            }
-        }
-
-        return result;
+        try {
+            if (probeType == HttpFetcher.ProbeType.FULL) {
+                try {
+                    var probeResult = probeContentType(url, timer, contentTags);
+
+                    logger.info(crawlerAuditMarker, "Probe result {} for {}", probeResult.getClass().getSimpleName(), url);
+
+                    switch (probeResult) {
+                        case HttpFetcher.ContentTypeProbeResult.NoOp():
+                            break;
+                        case HttpFetcher.ContentTypeProbeResult.Ok(EdgeUrl resolvedUrl):
+                            url = resolvedUrl; // If we were redirected while probing, use the final URL for fetching
+                            break;
+                        case ContentTypeProbeResult.BadContentType badContentType:
+                            warcRecorder.flagAsFailedContentTypeProbe(url, badContentType.contentType(), badContentType.statusCode());
+                            return new HttpFetchResult.ResultNone();
+                        case ContentTypeProbeResult.BadContentType.Timeout(Exception ex):
+                            warcRecorder.flagAsTimeout(url);
+                            return new HttpFetchResult.ResultException(ex);
+                        case ContentTypeProbeResult.Exception(Exception ex):
+                            warcRecorder.flagAsError(url, ex);
+                            return new HttpFetchResult.ResultException(ex);
+                        case ContentTypeProbeResult.HttpError httpError:
+                            return new HttpFetchResult.ResultException(new HttpException("HTTP status code " + httpError.statusCode() + ": " + httpError.message()));
+                        case ContentTypeProbeResult.Redirect redirect:
+                            return new HttpFetchResult.ResultRedirect(redirect.location());
+                    }
+                } catch (Exception ex) {
+                    logger.warn("Failed to fetch {}", url, ex);
+                    return new HttpFetchResult.ResultException(ex);
+                }
+            }
+
+            HttpGet request = new HttpGet(url.asURI());
+            request.addHeader("User-Agent", userAgentString);
+            request.addHeader("Accept-Encoding", "gzip");
+            request.addHeader("Accept-Language", "en,*;q=0.5");
+            request.addHeader("Accept", "text/html, application/xhtml+xml, text/*;q=0.8");
+
+            contentTags.paint(request);
+
+            try (var sl = new SendLock()) {
+                HttpFetchResult result = warcRecorder.fetch(client, request);
+
+                if (result instanceof HttpFetchResult.ResultOk ok) {
+                    if (ok.statusCode() == 304) {
+                        return new HttpFetchResult.Result304Raw();
+                    }
+                }
+
+                switch (result) {
+                    case HttpFetchResult.ResultOk ok -> logger.info(crawlerAuditMarker, "Fetch result OK {} for {}", ok.statusCode(), url);
+                    case HttpFetchResult.ResultRedirect redirect -> logger.info(crawlerAuditMarker, "Fetch result redirect: {} for {}", redirect.url(), url);
+                    case HttpFetchResult.ResultNone none -> logger.info(crawlerAuditMarker, "Fetch result none for {}", url);
+                    case HttpFetchResult.ResultException ex -> logger.error(crawlerAuditMarker, "Fetch result exception for " + url + ": {}", ex.ex());
+                    case HttpFetchResult.Result304Raw raw -> logger.info(crawlerAuditMarker, "Fetch result: 304 Raw for {}", url);
+                    case HttpFetchResult.Result304ReplacedWithReference ref -> logger.info(crawlerAuditMarker, "Fetch result: 304 With reference for {}", url);
+                }
+
+                return result;
+            }
+        }
+        catch (Exception ex) {
+            ex.printStackTrace();
+            return new HttpFetchResult.ResultException(ex);
+        }
     }
     @Override
@@ -242,6 +439,131 @@ public class HttpFetcherImpl implements HttpFetcher {
         return new SitemapRetriever();
     }
/** Recursively fetch sitemaps */
@Override
public List<EdgeUrl> fetchSitemapUrls(String root, CrawlDelayTimer delayTimer) {
try {
List<EdgeUrl> ret = new ArrayList<>();
Set<String> seenUrls = new HashSet<>();
Set<String> seenSitemaps = new HashSet<>();
Deque<EdgeUrl> sitemapQueue = new LinkedList<>();
EdgeUrl rootSitemapUrl = new EdgeUrl(root);
sitemapQueue.add(rootSitemapUrl);
int fetchedSitemaps = 0;
while (!sitemapQueue.isEmpty() && ret.size() < 20_000 && ++fetchedSitemaps < 10) {
var head = sitemapQueue.removeFirst();
switch (fetchSingleSitemap(head)) {
case SitemapResult.SitemapUrls(List<String> urls) -> {
for (var url : urls) {
if (seenUrls.add(url)) {
EdgeUrl.parse(url)
.filter(u -> u.domain.equals(rootSitemapUrl.domain))
.ifPresent(ret::add);
}
}
}
case SitemapResult.SitemapReferences(List<String> refs) -> {
for (var ref : refs) {
if (seenSitemaps.add(ref)) {
EdgeUrl.parse(ref)
.filter(url -> url.domain.equals(rootSitemapUrl.domain))
.ifPresent(sitemapQueue::addFirst);
}
}
}
case SitemapResult.SitemapError() -> {}
}
delayTimer.waitFetchDelay();
}
return ret;
}
catch (Exception ex) {
logger.error("Error while fetching sitemaps via {}: {} ({})", root, ex.getClass().getSimpleName(), ex.getMessage());
return List.of();
}
}
private SitemapResult fetchSingleSitemap(EdgeUrl sitemapUrl) throws URISyntaxException {
HttpGet getRequest = new HttpGet(sitemapUrl.asURI());
getRequest.addHeader("User-Agent", userAgentString);
getRequest.addHeader("Accept-Encoding", "gzip");
getRequest.addHeader("Accept", "text/*, */*;q=0.9");
try (var sl = new SendLock()) {
return client.execute(getRequest, response -> {
try {
if (response.getCode() != 200) {
return new SitemapResult.SitemapError();
}
Document parsedSitemap = Jsoup.parse(
EntityUtils.toString(response.getEntity()),
sitemapUrl.toString(),
Parser.xmlParser()
);
if (parsedSitemap.childrenSize() == 0) {
return new SitemapResult.SitemapError();
}
String rootTagName = parsedSitemap.child(0).tagName();
return switch (rootTagName.toLowerCase()) {
case "sitemapindex" -> {
List<String> references = new ArrayList<>();
for (var locTag : parsedSitemap.getElementsByTag("loc")) {
references.add(locTag.text().trim());
}
yield new SitemapResult.SitemapReferences(Collections.unmodifiableList(references));
}
case "urlset" -> {
List<String> urls = new ArrayList<>();
for (var locTag : parsedSitemap.select("url > loc")) {
urls.add(locTag.text().trim());
}
yield new SitemapResult.SitemapUrls(Collections.unmodifiableList(urls));
}
case "rss", "atom" -> {
List<String> urls = new ArrayList<>();
for (var locTag : parsedSitemap.select("link, url")) {
urls.add(locTag.text().trim());
}
yield new SitemapResult.SitemapUrls(Collections.unmodifiableList(urls));
}
default -> new SitemapResult.SitemapError();
};
}
finally {
EntityUtils.consume(response.getEntity());
}
});
}
catch (Exception ex) {
logger.warn("Error while fetching sitemap {}: {} ({})", sitemapUrl, ex.getClass().getSimpleName(), ex.getMessage());
return new SitemapResult.SitemapError();
}
}
private sealed interface SitemapResult {
record SitemapUrls(List<String> urls) implements SitemapResult {}
record SitemapReferences(List<String> sitemapRefs) implements SitemapResult {}
record SitemapError() implements SitemapResult {}
}
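The sitemap handler above dispatches on the document's root tag after parsing it with Jsoup's XML parser. The same extraction of `<loc>` values from a `<urlset>` can be sketched with the JDK's own DOM parser instead of Jsoup, so the example stays dependency-free; `extractLocs` is a hypothetical helper, not part of the patch:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Hypothetical helper: pull <loc> text values out of a sitemap <urlset>,
    // using the JDK DOM parser where the patch uses Jsoup's xmlParser()
    static List<String> extractLocs(String xml) {
        try {
            var doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            var locs = doc.getElementsByTagName("loc");
            List<String> urls = new ArrayList<>();
            for (int i = 0; i < locs.getLength(); i++) {
                urls.add(locs.item(i).getTextContent().trim());
            }
            return urls;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String sitemap = """
                <?xml version="1.0" encoding="UTF-8"?>
                <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
                  <url><loc>https://example.com/</loc></url>
                  <url><loc>https://example.com/about</loc></url>
                </urlset>""";
        List<String> urls = extractLocs(sitemap);
        if (urls.size() != 2) throw new AssertionError(urls);
        if (!urls.get(0).equals("https://example.com/")) throw new AssertionError(urls);
        System.out.println("ok");
    }
}
```

The real code additionally filters the extracted URLs to the root sitemap's domain and caps both the number of sitemaps fetched and the total URLs collected.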
     @Override
     public SimpleRobotRules fetchRobotRules(EdgeDomain domain, WarcRecorder recorder) {
         var ret = fetchAndParseRobotsTxt(new EdgeUrl("https", domain, null, "/robots.txt", null), recorder);
@@ -256,15 +578,14 @@ public class HttpFetcherImpl implements HttpFetcher {
     }
     private Optional<SimpleRobotRules> fetchAndParseRobotsTxt(EdgeUrl url, WarcRecorder recorder) {
-        try {
-            var getBuilder = new Request.Builder().get();
-
-            getBuilder.url(url.toString())
-                    .addHeader("Accept-Encoding", "gzip")
-                    .addHeader("Accept", "text/*, */*;q=0.9")
-                    .addHeader("User-agent", userAgentString);
-
-            HttpFetchResult result = recorder.fetch(client, getBuilder.build());
+        try (var sl = new SendLock()) {
+            HttpGet request = new HttpGet(url.asURI());
+            request.addHeader("User-Agent", userAgentString);
+            request.addHeader("Accept-Encoding", "gzip");
+            request.addHeader("Accept", "text/*, */*;q=0.9");
+
+            HttpFetchResult result = recorder.fetch(client, request);

             return DocumentBodyExtractor.asBytes(result).mapOpt((contentType, body) ->
                     robotsParser.parseContent(url.toString(),
@@ -278,6 +599,56 @@ public class HttpFetcherImpl implements HttpFetcher {
         }
     }
@Override
public boolean retryRequest(HttpRequest request, IOException exception, int executionCount, HttpContext context) {
if (exception instanceof SocketTimeoutException ex) {
return false;
}
return executionCount < 3;
}
@Override
public boolean retryRequest(HttpResponse response, int executionCount, HttpContext context) {
return switch (response.getCode()) {
case 500, 503 -> executionCount < 2;
case 429 -> executionCount < 3;
default -> false;
};
}
@Override
public TimeValue getRetryInterval(HttpRequest request, IOException exception, int executionCount, HttpContext context) {
return TimeValue.ofSeconds(1);
}
@Override
public TimeValue getRetryInterval(HttpResponse response, int executionCount, HttpContext context) {
int statusCode = response.getCode();
// Give 503 a bit more time
if (statusCode == 503) return TimeValue.ofSeconds(5);
if (statusCode == 429) {
// get the Retry-After header
            var retryAfterHeader = response.getFirstHeader("Retry-After");
            if (retryAfterHeader == null) {
                return TimeValue.ofSeconds(2);
            }
            String retryAfter = retryAfterHeader.getValue();
try {
int retryAfterTime = Integer.parseInt(retryAfter);
retryAfterTime = Math.clamp(retryAfterTime, 1, 5);
return TimeValue.ofSeconds(retryAfterTime);
} catch (NumberFormatException e) {
logger.warn("Invalid Retry-After header: {}", retryAfter);
}
}
return TimeValue.ofSeconds(2);
}
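The retry-interval logic above boils down to: honor a numeric `Retry-After` value, but clamp it to a sane window and fall back to a default when the header is absent or is an HTTP-date rather than a delay in seconds. A standalone sketch of that rule (hypothetical `retryDelaySeconds` helper, using `Math.max`/`Math.min` in place of the newer `Math.clamp`):

```java
public class Main {
    // Sketch of the Retry-After handling above: parse the header value,
    // clamp to 1..5 seconds, fall back to 2 seconds when the header is
    // missing or not a plain integer (e.g. an HTTP-date)
    static int retryDelaySeconds(String retryAfter) {
        if (retryAfter == null) return 2;
        try {
            int seconds = Integer.parseInt(retryAfter.trim());
            return Math.max(1, Math.min(5, seconds));
        } catch (NumberFormatException e) {
            return 2;
        }
    }

    public static void main(String[] args) {
        if (retryDelaySeconds(null) != 2) throw new AssertionError();
        if (retryDelaySeconds("30") != 5) throw new AssertionError();   // clamped down
        if (retryDelaySeconds("0") != 1) throw new AssertionError();    // clamped up
        if (retryDelaySeconds("Fri, 31 Dec 1999 23:59:59 GMT") != 2) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Clamping matters for a crawler: a hostile or misconfigured server could otherwise dictate arbitrarily long stalls via `Retry-After`.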
     public static class RateLimitException extends Exception {
         private final String retryAfter;
@@ -298,5 +669,31 @@ public class HttpFetcherImpl implements HttpFetcher {
         }
     }
 }
}
class SendLock implements AutoCloseable {
private static final Semaphore maxConcurrentRequests = new Semaphore(Integer.getInteger("crawler.maxConcurrentRequests", 512));
boolean closed = false;
public SendLock() {
maxConcurrentRequests.acquireUninterruptibly();
}
public static <T> T wrapSend(HttpClient client, final ClassicHttpRequest request,
final HttpClientResponseHandler<? extends T> responseHandler) throws IOException {
try (var lock = new SendLock()) {
return client.execute(request, responseHandler);
}
}
@Override
public void close() {
if (!closed) {
maxConcurrentRequests.release();
closed = true;
}
}
} }
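The `SendLock` pattern above, a semaphore permit acquired in the constructor and released by an idempotent `close()`, caps the number of concurrent HTTP requests while staying compatible with try-with-resources. A minimal self-contained sketch of the same idea (illustrative names, permit count shrunk to 2 for the demo):

```java
import java.util.concurrent.Semaphore;

public class Main {
    // Shared pool of permits; the real code sizes this from the
    // crawler.maxConcurrentRequests system property (default 512)
    static final Semaphore permits = new Semaphore(2);

    // Sketch of SendLock: acquire on construction, release exactly once on close
    static class Gate implements AutoCloseable {
        private boolean closed = false;

        Gate() {
            permits.acquireUninterruptibly();
        }

        @Override
        public void close() {
            if (!closed) {          // idempotent: never over-release the semaphore
                permits.release();
                closed = true;
            }
        }
    }

    public static void main(String[] args) {
        try (var gate = new Gate()) {
            if (permits.availablePermits() != 1) throw new AssertionError("permit not taken");
        }
        if (permits.availablePermits() != 2) throw new AssertionError("permit not returned");

        // Double-close must be harmless
        Gate g = new Gate();
        g.close();
        g.close();
        if (permits.availablePermits() != 2) throw new AssertionError("over-released");
        System.out.println("ok");
    }
}
```

Guarding the `release()` with a `closed` flag is the load-bearing detail: a double `close()` on an unguarded semaphore would silently raise the effective concurrency limit.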


@@ -1,31 +0,0 @@
package nu.marginalia.crawl.fetcher.socket;
import okhttp3.Interceptor;
import okhttp3.Response;
import org.jetbrains.annotations.NotNull;
import java.io.IOException;
/** An interceptor that intercepts network requests and adds the remote IP address as
* a header in the response. This is used to pass the remote IP address to the Warc
* writer, as this information is not available in the response.
*/
public class IpInterceptingNetworkInterceptor implements Interceptor {
private static final String pseudoHeaderName = "X-Marginalia-Remote-IP";
@NotNull
@Override
public Response intercept(@NotNull Interceptor.Chain chain) throws IOException {
String IP = chain.connection().socket().getInetAddress().getHostAddress();
return chain.proceed(chain.request())
.newBuilder()
.addHeader(pseudoHeaderName, IP)
.build();
}
public static String getIpFromResponse(Response response) {
return response.header(pseudoHeaderName);
}
}


@@ -27,7 +27,7 @@ public class NoSecuritySSL {
         }
     };

-    public static SSLSocketFactory buildSocketFactory() {
+    public static SSLContext buildSslContext() {
         try {
             // Install the all-trusting trust manager
             final SSLContext sslContext = SSLContext.getInstance("TLS");
@@ -40,14 +40,11 @@
             clientSessionContext.setSessionCacheSize(2048);

             // Create a ssl socket factory with our all-trusting manager
-            return sslContext.getSocketFactory();
+            return sslContext;
         }
         catch (Exception e) {
             throw new RuntimeException(e);
         }
     }
-
-    public static HostnameVerifier buildHostnameVerifyer() {
-        return (hn, session) -> true;
-    }
 }


@@ -1,15 +1,20 @@
 package nu.marginalia.crawl.fetcher.warc;

-import okhttp3.Headers;
-import okhttp3.Response;
+import org.apache.commons.io.IOUtils;
 import org.apache.commons.io.input.BOMInputStream;
+import org.apache.hc.client5.http.classic.methods.HttpGet;
+import org.apache.hc.core5.http.ClassicHttpResponse;
+import org.apache.hc.core5.http.Header;
 import org.netpreserve.jwarc.WarcTruncationReason;

 import java.io.*;
 import java.nio.file.Files;
 import java.nio.file.Path;
-import java.util.Objects;
-import java.util.zip.GZIPInputStream;
+import java.time.Duration;
+import java.time.Instant;
+import java.util.Arrays;
+
+import static nu.marginalia.crawl.fetcher.warc.ErrorBuffer.suppressContentEncoding;

 /** Input buffer for temporary storage of a HTTP response
  * This may be in-memory or on-disk, at the discretion of
@@ -17,8 +22,9 @@ import java.util.zip.GZIPInputStream;
  * */
 public abstract class WarcInputBuffer implements AutoCloseable {
     protected WarcTruncationReason truncationReason = WarcTruncationReason.NOT_TRUNCATED;
-    protected Headers headers;
-
-    WarcInputBuffer(Headers headers) {
+    protected Header[] headers;
+
+    WarcInputBuffer(Header[] headers) {
         this.headers = headers;
     }
@@ -30,7 +36,7 @@ public abstract class WarcInputBuffer implements AutoCloseable {
     public final WarcTruncationReason truncationReason() { return truncationReason; }

-    public final Headers headers() { return headers; }
+    public final Header[] headers() { return headers; }
     /** Create a buffer for a response.
      * If the response is small and not compressed, it will be stored in memory.
@@ -38,33 +44,51 @@ public abstract class WarcInputBuffer implements AutoCloseable {
      * and suppressed from the headers.
      * If an error occurs, a buffer will be created with no content and an error status.
      */
-    static WarcInputBuffer forResponse(Response rsp) {
-        if (rsp == null)
+    static WarcInputBuffer forResponse(ClassicHttpResponse response,
+                                       HttpGet request,
+                                       Duration timeLimit) throws IOException {
+        if (response == null)
             return new ErrorBuffer();

-        try {
-            String contentLengthHeader = Objects.requireNonNullElse(rsp.header("Content-Length"), "-1");
-            int contentLength = Integer.parseInt(contentLengthHeader);
-            String contentEncoding = rsp.header("Content-Encoding");
-
-            if (contentEncoding == null && contentLength > 0 && contentLength < 8192) {
+        var entity = response.getEntity();
+
+        if (null == entity) {
+            return new ErrorBuffer();
+        }
+
+        InputStream is = entity.getContent();
+        long length = entity.getContentLength();
+
+        try {
+            if (length > 0 && length < 8192) {
                 // If the content is small and not compressed, we can just read it into memory
-                return new MemoryBuffer(rsp, contentLength);
-            }
-            else {
+                return new MemoryBuffer(response.getHeaders(), request, timeLimit, is, (int) length);
+            } else {
                 // Otherwise, we unpack it into a file and read it from there
-                return new FileBuffer(rsp);
+                return new FileBuffer(response.getHeaders(), request, timeLimit, is);
             }
         }
-        catch (Exception ex) {
-            return new ErrorBuffer(rsp);
+        finally {
+            try {
+                is.skip(Long.MAX_VALUE);
+            }
+            catch (IOException e) {
+                // Ignore the exception
+            }
+            finally {
+                // Close the input stream
+                IOUtils.closeQuietly(is);
+            }
         }
     }
     /** Copy an input stream to an output stream, with a maximum size and time limit */
-    protected void copy(InputStream is, OutputStream os) {
-        long startTime = System.currentTimeMillis();
+    protected void copy(InputStream is, HttpGet request, OutputStream os, Duration timeLimit) {
+        Instant start = Instant.now();
+        Instant timeout = start.plus(timeLimit);
         long size = 0;

         byte[] buffer = new byte[8192];
@@ -74,24 +98,104 @@ public abstract class WarcInputBuffer implements AutoCloseable {
         while (true) {
             try {
+                Duration remaining = Duration.between(Instant.now(), timeout);
+                if (remaining.isNegative()) {
+                    truncationReason = WarcTruncationReason.TIME;
+                    // Abort the request if the time limit is exceeded
+                    // so we don't keep the connection open forever or are forced to consume
+                    // the stream to the end
+                    request.abort();
+                    break;
+                }
+
                 int n = is.read(buffer);
                 if (n < 0) break;
                 size += n;
-                os.write(buffer, 0, n);

-                if (size > WarcRecorder.MAX_SIZE) {
+                // Even if we've exceeded the max length,
+                // we keep consuming the stream up until the end or a timeout,
+                // as closing the stream means resetting the connection, and
+                // that's generally not desirable.
+
+                if (size < WarcRecorder.MAX_SIZE) {
+                    os.write(buffer, 0, n);
+                }
+                else if (truncationReason != WarcTruncationReason.LENGTH) {
                     truncationReason = WarcTruncationReason.LENGTH;
                     break;
                 }
-
-                if (System.currentTimeMillis() - startTime > WarcRecorder.MAX_TIME) {
-                    truncationReason = WarcTruncationReason.TIME;
-                    break;
-                }
             } catch (IOException e) {
-                throw new RuntimeException(e);
+                truncationReason = WarcTruncationReason.UNSPECIFIED;
             }
         }
     }
    /** Takes a Content-Range header and checks if it is complete.
     * A complete range is one that covers the entire resource.
     * For example, "bytes 0-2047/2048" is a complete range,
     * while "bytes 0-1023/2048" and "bytes 0-1023/*" are not.
     */
public boolean isRangeComplete(Header[] headers) {
// Find the Content-Range header
String contentRangeHeader = null;
for (var header : headers) {
if ("Content-Range".equalsIgnoreCase(header.getName())) {
contentRangeHeader = header.getValue();
break;
}
}
// Return true if header is null or empty
if (contentRangeHeader == null || contentRangeHeader.isEmpty()) {
return true;
}
try {
// Content-Range format: "bytes range-start-range-end/size"
// e.g., "bytes 0-1023/2048" or "bytes 0-1023/*"
// Get the part after "bytes "
String[] parts = contentRangeHeader.split(" ", 2);
if (parts.length < 2) {
return false;
}
// Get the range and size parts (e.g., "0-1023/2048")
String rangeAndSize = parts[1];
String[] rangeAndSizeParts = rangeAndSize.split("/", 2);
if (rangeAndSizeParts.length < 2) {
return false;
}
// Get the range (e.g., "0-1023")
String range = rangeAndSizeParts[0];
String[] rangeParts = range.split("-", 2);
if (rangeParts.length < 2) {
return false;
}
// Get the size (e.g., "2048" or "*")
String size = rangeAndSizeParts[1];
// If size is "*", we don't know the total size, so return false
if ("*".equals(size)) {
return false;
}
// Parse as long to handle large files
long rangeStart = Long.parseLong(rangeParts[0]);
long rangeEnd = Long.parseLong(rangeParts[1]);
long totalSize = Long.parseLong(size);
// Check if the range covers the entire resource
return rangeStart == 0 && rangeEnd == totalSize - 1;
} catch (NumberFormatException | ArrayIndexOutOfBoundsException e) {
return false;
}
    }
}
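The `isRangeComplete` parsing above reduces to: no `Content-Range` header means complete, an unknown total size (`/*`) means incomplete, and otherwise the range must run from byte 0 to the last byte of the stated size. A standalone restatement of that rule (hypothetical `isComplete` helper mirroring the method above):

```java
public class Main {
    // Restatement of the Content-Range completeness rule above:
    // "bytes 0-2047/2048" covers the whole resource; "bytes 0-1023/2048"
    // and "bytes 0-1023/*" do not
    static boolean isComplete(String contentRange) {
        if (contentRange == null || contentRange.isEmpty()) return true;

        String[] parts = contentRange.split(" ", 2);          // "bytes" / "0-2047/2048"
        if (parts.length < 2) return false;

        String[] rangeAndSize = parts[1].split("/", 2);       // "0-2047" / "2048"
        if (rangeAndSize.length < 2) return false;

        String[] range = rangeAndSize[0].split("-", 2);       // "0" / "2047"
        if (range.length < 2) return false;

        if ("*".equals(rangeAndSize[1])) return false;        // unknown total size

        try {
            long start = Long.parseLong(range[0]);
            long end = Long.parseLong(range[1]);
            long total = Long.parseLong(rangeAndSize[1]);
            return start == 0 && end == total - 1;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        if (!isComplete(null)) throw new AssertionError();
        if (!isComplete("bytes 0-2047/2048")) throw new AssertionError();
        if (isComplete("bytes 0-1023/2048")) throw new AssertionError();
        if (isComplete("bytes 0-1023/*")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

An incomplete range is what lets the buffer mark the record as length-truncated even when the body itself arrived intact.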
@@ -99,12 +203,8 @@ public abstract class WarcInputBuffer implements AutoCloseable {

 /** Pseudo-buffer for when we have an error */
 class ErrorBuffer extends WarcInputBuffer {
     public ErrorBuffer() {
-        super(Headers.of());
+        super(new Header[0]);
-        truncationReason = WarcTruncationReason.UNSPECIFIED;
-    }
-
-    public ErrorBuffer(Response rsp) {
-        super(rsp.headers());
         truncationReason = WarcTruncationReason.UNSPECIFIED;
     }
@@ -120,17 +220,29 @@ class ErrorBuffer extends WarcInputBuffer {
     @Override
     public void close() throws Exception {}
+
+    static Header[] suppressContentEncoding(Header[] headers) {
+        return Arrays.stream(headers).filter(header -> !"Content-Encoding".equalsIgnoreCase(header.getName())).toArray(Header[]::new);
+    }
 }
 /** Buffer for when we have the response in memory */
 class MemoryBuffer extends WarcInputBuffer {
     byte[] data;
-    public MemoryBuffer(Response response, int size) {
-        super(response.headers());
+    public MemoryBuffer(Header[] headers, HttpGet request, Duration timeLimit, InputStream responseStream, int size) {
+        super(suppressContentEncoding(headers));
+
+        if (!isRangeComplete(headers)) {
+            truncationReason = WarcTruncationReason.LENGTH;
+        } else {
+            truncationReason = WarcTruncationReason.NOT_TRUNCATED;
+        }
+
         var outputStream = new ByteArrayOutputStream(size);

-        copy(response.body().byteStream(), outputStream);
+        copy(responseStream, request, outputStream, timeLimit);

         data = outputStream.toByteArray();
     }
@@ -154,53 +266,25 @@ class MemoryBuffer extends WarcInputBuffer {
 class FileBuffer extends WarcInputBuffer {
     private final Path tempFile;

-    public FileBuffer(Response response) throws IOException {
-        super(suppressContentEncoding(response.headers()));
+    public FileBuffer(Header[] headers, HttpGet request, Duration timeLimit, InputStream responseStream) throws IOException {
+        super(suppressContentEncoding(headers));
+
+        if (!isRangeComplete(headers)) {
+            truncationReason = WarcTruncationReason.LENGTH;
+        } else {
+            truncationReason = WarcTruncationReason.NOT_TRUNCATED;
+        }

         this.tempFile = Files.createTempFile("rsp", ".html");

-        if (response.body() == null) {
-            truncationReason = WarcTruncationReason.DISCONNECT;
-            return;
-        }
-
-        if ("gzip".equals(response.header("Content-Encoding"))) {
-            try (var out = Files.newOutputStream(tempFile)) {
-                copy(new GZIPInputStream(response.body().byteStream()), out);
-            }
-            catch (Exception ex) {
-                truncationReason = WarcTruncationReason.UNSPECIFIED;
-            }
-        }
-        else {
-            try (var out = Files.newOutputStream(tempFile)) {
-                copy(response.body().byteStream(), out);
-            }
-            catch (Exception ex) {
-                truncationReason = WarcTruncationReason.UNSPECIFIED;
-            }
-        }
-    }
-
-    private static Headers suppressContentEncoding(Headers headers) {
-        var builder = new Headers.Builder();
-        headers.toMultimap().forEach((k, values) -> {
-            if ("Content-Encoding".equalsIgnoreCase(k)) {
-                return;
-            }
-            if ("Transfer-Encoding".equalsIgnoreCase(k)) {
-                return;
-            }
-            for (var value : values) {
-                builder.add(k, value);
-            }
-        });
-        return builder.build();
-    }
+        try (var out = Files.newOutputStream(tempFile)) {
+            copy(responseStream, request, out, timeLimit);
+        }
+        catch (Exception ex) {
+            truncationReason = WarcTruncationReason.UNSPECIFIED;
+        }
+    }

     public InputStream read() throws IOException {
         return Files.newInputStream(tempFile);
     }


@@ -1,11 +1,14 @@
 package nu.marginalia.crawl.fetcher.warc;

-import okhttp3.Protocol;
-import okhttp3.Response;
 import org.apache.commons.lang3.StringUtils;
+import org.apache.hc.core5.http.ClassicHttpResponse;
+import org.apache.hc.core5.http.Header;

 import java.net.URI;
 import java.net.URLEncoder;
+import java.net.http.HttpClient;
+import java.net.http.HttpHeaders;
+import java.net.http.HttpResponse;
 import java.nio.charset.StandardCharsets;
 import java.util.*;
 import java.util.stream.Collectors;
@@ -16,7 +19,7 @@ import java.util.stream.Collectors;
 public class WarcProtocolReconstructor {

     static String getHttpRequestString(String method,
-                                       Map<String, List<String>> mainHeaders,
+                                       Header[] mainHeaders,
                                        Map<String, List<String>> extraHeaders,
                                        URI uri) {
         StringBuilder requestStringBuilder = new StringBuilder();
@@ -33,12 +36,13 @@ public class WarcProtocolReconstructor {

         Set<String> addedHeaders = new HashSet<>();

-        mainHeaders.forEach((k, values) -> {
-            for (var value : values) {
-                addedHeaders.add(k);
-                requestStringBuilder.append(capitalizeHeader(k)).append(": ").append(value).append("\r\n");
-            }
-        });
+        for (var header : mainHeaders) {
+            String k = header.getName();
+            String v = header.getValue();
+
+            addedHeaders.add(k);
+            requestStringBuilder.append(capitalizeHeader(k)).append(": ").append(v).append("\r\n");
+        }

         extraHeaders.forEach((k, values) -> {
             if (!addedHeaders.contains(k)) {
return "HTTP/" + version + " " + statusCode + " " + statusMessage + "\r\n" + headerString + "\r\n\r\n"; return "HTTP/" + version + " " + statusCode + " " + statusMessage + "\r\n" + headerString + "\r\n\r\n";
} }
static String getResponseHeader(Response response, long size) { static String getResponseHeader(HttpResponse<?> response, long size) {
String version = response.protocol() == Protocol.HTTP_1_1 ? "1.1" : "2.0"; String version = response.version() == HttpClient.Version.HTTP_1_1 ? "1.1" : "2.0";
String statusCode = String.valueOf(response.code()); String statusCode = String.valueOf(response.statusCode());
String statusMessage = STATUS_CODE_MAP.getOrDefault(response.code(), "Unknown"); String statusMessage = STATUS_CODE_MAP.getOrDefault(response.statusCode(), "Unknown");
String headerString = getHeadersAsString(response, size); String headerString = getHeadersAsString(response.headers(), size);
return "HTTP/" + version + " " + statusCode + " " + statusMessage + "\r\n" + headerString + "\r\n\r\n"; return "HTTP/" + version + " " + statusCode + " " + statusMessage + "\r\n" + headerString + "\r\n\r\n";
} }
static String getResponseHeader(ClassicHttpResponse response, long size) {
String headerString = getHeadersAsString(response.getHeaders(), size);
return response.getVersion().format() + " " + response.getCode() + " " + response.getReasonPhrase() + "\r\n" + headerString + "\r\n\r\n";
}
private static final Map<Integer, String> STATUS_CODE_MAP = Map.ofEntries( private static final Map<Integer, String> STATUS_CODE_MAP = Map.ofEntries(
Map.entry(200, "OK"), Map.entry(200, "OK"),
Map.entry(201, "Created"), Map.entry(201, "Created"),
@@ -148,10 +158,41 @@ public class WarcProtocolReconstructor {
         return joiner.toString();
     }

-    static private String getHeadersAsString(Response response, long responseSize) {
+    static private String getHeadersAsString(Header[] headers, long responseSize) {
         StringJoiner joiner = new StringJoiner("\r\n");

-        response.headers().toMultimap().forEach((k, values) -> {
+        for (var header : headers) {
+            String headerCapitalized = capitalizeHeader(header.getName());
+
+            // Omit pseudoheaders injected by the crawler itself
+            if (headerCapitalized.startsWith("X-Marginalia"))
+                continue;
+
+            // Omit Transfer-Encoding and Content-Encoding headers
+            if (headerCapitalized.equals("Transfer-Encoding"))
+                continue;
+            if (headerCapitalized.equals("Content-Encoding"))
+                continue;
+
+            // Since we're transparently decoding gzip, we need to update the Content-Length header
+            // to reflect the actual size of the response body.  We'll do this at the end.
+            if (headerCapitalized.equals("Content-Length"))
+                continue;
+
+            joiner.add(headerCapitalized + ": " + header.getValue());
+        }
+
+        joiner.add("Content-Length: " + responseSize);
+
+        return joiner.toString();
+    }
+
+    static private String getHeadersAsString(HttpHeaders headers, long responseSize) {
+        StringJoiner joiner = new StringJoiner("\r\n");
+
+        headers.map().forEach((k, values) -> {
             String headerCapitalized = capitalizeHeader(k);

             // Omit pseudoheaders injected by the crawler itself
@@ -179,8 +220,8 @@ public class WarcProtocolReconstructor {
         return joiner.toString();
     }

-    // okhttp gives us flattened headers, so we need to reconstruct Camel-Kebab-Case style
-    // for the WARC parser's sake...
+    // okhttp gave us flattened headers, so we need to reconstruct Camel-Kebab-Case style
+    // for the WARC parser's sake... (do we still need this, mr chesterton?)
     static private String capitalizeHeader(String k) {
         return Arrays.stream(StringUtils.split(k, '-'))
                 .map(StringUtils::capitalize)
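The `capitalizeHeader` helper referenced above rebuilds flattened header names in Camel-Kebab-Case for the WARC parser's benefit. The same transform can be sketched without Apache Commons (hypothetical plain-JDK equivalent, not the patch's code):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class Main {
    // Plain-JDK sketch of capitalizeHeader: split on '-', capitalize each
    // part, rejoin, so "content-type" becomes "Content-Type"
    static String capitalizeHeader(String k) {
        return Arrays.stream(k.split("-"))
                .map(part -> part.isEmpty() ? part
                        : Character.toUpperCase(part.charAt(0)) + part.substring(1))
                .collect(Collectors.joining("-"));
    }

    public static void main(String[] args) {
        if (!capitalizeHeader("content-type").equals("Content-Type")) throw new AssertionError();
        if (!capitalizeHeader("x-marginalia-remote-ip").equals("X-Marginalia-Remote-Ip")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Note this is purely cosmetic normalization; HTTP header names are case-insensitive on the wire.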


@@ -1,13 +1,17 @@
 package nu.marginalia.crawl.fetcher.warc;
 
 import nu.marginalia.crawl.fetcher.ContentTags;
+import nu.marginalia.crawl.fetcher.HttpFetcher;
 import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
-import nu.marginalia.crawl.fetcher.socket.IpInterceptingNetworkInterceptor;
+import nu.marginalia.link_parser.LinkParser;
 import nu.marginalia.model.EdgeDomain;
 import nu.marginalia.model.EdgeUrl;
 import nu.marginalia.model.body.HttpFetchResult;
-import okhttp3.OkHttpClient;
-import okhttp3.Request;
+import org.apache.hc.client5.http.classic.HttpClient;
+import org.apache.hc.client5.http.classic.methods.HttpGet;
+import org.apache.hc.client5.http.cookie.BasicCookieStore;
+import org.apache.hc.client5.http.cookie.CookieStore;
+import org.apache.hc.core5.http.NameValuePair;
 import org.jetbrains.annotations.Nullable;
 import org.netpreserve.jwarc.*;
 import org.slf4j.Logger;
@@ -16,18 +20,20 @@ import org.slf4j.LoggerFactory;
 import java.io.IOException;
 import java.io.InputStream;
 import java.net.InetAddress;
+import java.net.SocketTimeoutException;
 import java.net.URI;
 import java.net.URISyntaxException;
 import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.security.NoSuchAlgorithmException;
+import java.time.Duration;
 import java.time.Instant;
 import java.util.*;
 
 /** Based on JWarc's fetch method, APL 2.0 license
  * <p></p>
- * This class wraps OkHttp's OkHttpClient and records the HTTP request and response in a WARC file,
+ * This class wraps HttpClient and records the HTTP request and response in a WARC file,
  * as best is possible given not all the data is available at the same time and needs to
  * be reconstructed.
  */
@@ -47,20 +53,23 @@ public class WarcRecorder implements AutoCloseable {
     // Affix a version string in case we need to change the format in the future
     // in some way
     private final String warcRecorderVersion = "1.0";
+    private final CookieStore cookies;
 
-    // We need to know if the site uses cookies so this can be reported among the search results
-    // -- flip this to true if we see any cookies.  This information will also be painted on any
-    // revisited pages.  It's not 100% perfect and a bit order dependent, but it's good enough.
-    private final WarcXCookieInformationHeader cookieInformation = new WarcXCookieInformationHeader();
+    private final LinkParser linkParser = new LinkParser();
 
     /**
      * Create a new WarcRecorder that will write to the given file
     *
     * @param warcFile The file to write to
     */
-    public WarcRecorder(Path warcFile) throws IOException {
+    public WarcRecorder(Path warcFile, HttpFetcherImpl fetcher) throws IOException {
         this.warcFile = warcFile;
         this.writer = new WarcWriter(warcFile);
+        this.cookies = fetcher.getCookies();
+    }
+
+    public WarcRecorder(Path warcFile, CookieStore cookies) throws IOException {
+        this.warcFile = warcFile;
+        this.writer = new WarcWriter(warcFile);
+        this.cookies = cookies;
     }
 
     /**
@@ -70,112 +79,179 @@ public class WarcRecorder implements AutoCloseable {
     public WarcRecorder() throws IOException {
         this.warcFile = Files.createTempFile("warc", ".warc.gz");
         this.writer = new WarcWriter(this.warcFile);
+        this.cookies = new BasicCookieStore();
 
         temporaryFile = true;
     }
 
-    public HttpFetchResult fetch(OkHttpClient client, Request request) throws NoSuchAlgorithmException,
-            IOException,
-            URISyntaxException,
-            InterruptedException
+    private boolean hasCookies() {
+        return !cookies.getCookies().isEmpty();
+    }
+
+    public HttpFetchResult fetch(HttpClient client,
+                                 HttpGet request)
+            throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException
     {
-        URI requestUri = request.url().uri();
+        return fetch(client, request, Duration.ofMillis(MAX_TIME));
+    }
+
+    public HttpFetchResult fetch(HttpClient client,
+                                 HttpGet request,
+                                 Duration timeout)
+            throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException
+    {
+        URI requestUri = request.getUri();
 
         WarcDigestBuilder responseDigestBuilder = new WarcDigestBuilder();
         WarcDigestBuilder payloadDigestBuilder = new WarcDigestBuilder();
 
-        String ip;
         Instant date = Instant.now();
 
-        var call = client.newCall(request);
+        // Not entirely sure why we need to do this, but keeping it due to Chesterton's Fence
+        Map<String, List<String>> extraHeaders = new HashMap<>(request.getHeaders().length);
 
-        cookieInformation.update(client, request.url());
+        // Inject a range header to attempt to limit the size of the response
+        // to the maximum size we want to store, if the server supports it.
+        request.addHeader("Range", "bytes=0-"+MAX_SIZE);
 
-        try (var response = call.execute();
-             WarcInputBuffer inputBuffer = WarcInputBuffer.forResponse(response))
-        {
-            byte[] responseHeaders = WarcProtocolReconstructor.getResponseHeader(response, inputBuffer.size()).getBytes(StandardCharsets.UTF_8);
-            ResponseDataBuffer responseDataBuffer = new ResponseDataBuffer(inputBuffer.size() + responseHeaders.length);
-            InputStream inputStream = inputBuffer.read();
+        try {
+            return client.execute(request, response -> {
+                try (WarcInputBuffer inputBuffer = WarcInputBuffer.forResponse(response, request, timeout);
+                     InputStream inputStream = inputBuffer.read()) {
 
-            ip = IpInterceptingNetworkInterceptor.getIpFromResponse(response);
+                    // Build and write the request
 
-            responseDataBuffer.put(responseHeaders);
-            responseDataBuffer.updateDigest(responseDigestBuilder, 0, responseHeaders.length);
+                    WarcDigestBuilder requestDigestBuilder = new WarcDigestBuilder();
 
-            int dataStart = responseDataBuffer.pos();
+                    byte[] httpRequestString = WarcProtocolReconstructor
+                            .getHttpRequestString(
+                                    request.getMethod(),
+                                    request.getHeaders(),
+                                    extraHeaders,
+                                    requestUri)
+                            .getBytes();
 
-            for (;;) {
-                int remainingLength = responseDataBuffer.remaining();
-                if (remainingLength == 0)
-                    break;
+                    requestDigestBuilder.update(httpRequestString);
 
-                int startPos = responseDataBuffer.pos();
+                    WarcRequest warcRequest = new WarcRequest.Builder(requestUri)
+                            .blockDigest(requestDigestBuilder.build())
+                            .date(date)
+                            .body(MediaType.HTTP_REQUEST, httpRequestString)
+                            .build();
 
-                int n = responseDataBuffer.readFrom(inputStream, remainingLength);
-                if (n < 0)
-                    break;
+                    warcRequest.http(); // force HTTP header to be parsed before body is consumed so that caller can use it
+                    writer.write(warcRequest);
 
-                responseDataBuffer.updateDigest(responseDigestBuilder, startPos, n);
-                responseDataBuffer.updateDigest(payloadDigestBuilder, startPos, n);
-            }
+                    if (hasCookies()) {
+                        extraHeaders.put("X-Has-Cookies", List.of("1"));
+                    }
 
-            // It looks like this might be the same as requestUri, but it's not;
-            // it's the URI after resolving redirects.
-            final URI responseUri = response.request().url().uri();
+                    byte[] responseHeaders = WarcProtocolReconstructor.getResponseHeader(response, inputBuffer.size()).getBytes(StandardCharsets.UTF_8);
 
-            WarcResponse.Builder responseBuilder = new WarcResponse.Builder(responseUri)
-                    .blockDigest(responseDigestBuilder.build())
-                    .date(date)
-                    .body(MediaType.HTTP_RESPONSE, responseDataBuffer.copyBytes());
+                    ResponseDataBuffer responseDataBuffer = new ResponseDataBuffer(inputBuffer.size() + responseHeaders.length);
 
-            cookieInformation.paint(responseBuilder);
+                    responseDataBuffer.put(responseHeaders);
+                    responseDataBuffer.updateDigest(responseDigestBuilder, 0, responseHeaders.length);
 
-            if (ip != null) responseBuilder.ipAddress(InetAddress.getByName(ip));
+                    int dataStart = responseDataBuffer.pos();
 
-            responseBuilder.payloadDigest(payloadDigestBuilder.build());
-            responseBuilder.truncated(inputBuffer.truncationReason());
+                    for (;;) {
+                        int remainingLength = responseDataBuffer.remaining();
+                        if (remainingLength == 0)
+                            break;
 
-            // Build and write the response
-            var warcResponse = responseBuilder.build();
-            warcResponse.http(); // force HTTP header to be parsed before body is consumed so that caller can use it
-            writer.write(warcResponse);
+                        int startPos = responseDataBuffer.pos();
 
-            // Build and write the request
+                        int n = responseDataBuffer.readFrom(inputStream, remainingLength);
+                        if (n < 0)
+                            break;
 
-            WarcDigestBuilder requestDigestBuilder = new WarcDigestBuilder();
+                        responseDataBuffer.updateDigest(responseDigestBuilder, startPos, n);
+                        responseDataBuffer.updateDigest(payloadDigestBuilder, startPos, n);
+                    }
 
-            byte[] httpRequestString = WarcProtocolReconstructor
-                    .getHttpRequestString(
-                            response.request().method(),
-                            response.request().headers().toMultimap(),
-                            request.headers().toMultimap(),
-                            requestUri)
-                    .getBytes();
+                    // with some http client libraries, that resolve redirects transparently, this might be different
+                    // from the request URI, but currently we don't have transparent redirect resolution so it's always
+                    // the same (though let's keep the variables separate in case this changes)
+                    final URI responseUri = requestUri;
 
-            requestDigestBuilder.update(httpRequestString);
+                    WarcResponse.Builder responseBuilder = new WarcResponse.Builder(responseUri)
+                            .blockDigest(responseDigestBuilder.build())
+                            .date(date)
+                            .concurrentTo(warcRequest.id())
+                            .body(MediaType.HTTP_RESPONSE, responseDataBuffer.copyBytes());
 
-            WarcRequest warcRequest = new WarcRequest.Builder(requestUri)
-                    .blockDigest(requestDigestBuilder.build())
-                    .date(date)
-                    .body(MediaType.HTTP_REQUEST, httpRequestString)
-                    .concurrentTo(warcResponse.id())
-                    .build();
+                    InetAddress inetAddress = InetAddress.getByName(responseUri.getHost());
+                    responseBuilder.ipAddress(inetAddress);
+                    responseBuilder.payloadDigest(payloadDigestBuilder.build());
+                    responseBuilder.truncated(inputBuffer.truncationReason());
 
-            warcRequest.http(); // force HTTP header to be parsed before body is consumed so that caller can use it
-            writer.write(warcRequest);
+                    // Build and write the response
 
-            return new HttpFetchResult.ResultOk(responseUri,
-                    response.code(),
-                    inputBuffer.headers(),
-                    ip,
-                    responseDataBuffer.data,
-                    dataStart,
-                    responseDataBuffer.length() - dataStart);
-        }
-        catch (Exception ex) {
+                    var warcResponse = responseBuilder.build();
+                    warcResponse.http(); // force HTTP header to be parsed before body is consumed so that caller can use it
+                    writer.write(warcResponse);
+
+                    if (Duration.between(date, Instant.now()).compareTo(Duration.ofSeconds(9)) > 0
+                            && inputBuffer.size() < 2048
+                            && !requestUri.getPath().endsWith("robots.txt")) // don't bail on robots.txt
+                    {
+                        // Fast detection and mitigation of crawler traps that respond with slow
+                        // small responses, with a high branching factor
+
+                        // Note we bail *after* writing the warc records, this will effectively only
+                        // prevent link extraction from the document.
+
+                        logger.warn("URL {} took too long to fetch ({}s) and was too small for the effort ({}b)",
+                                requestUri,
+                                Duration.between(date, Instant.now()).getSeconds(),
+                                inputBuffer.size()
+                        );
+
+                        return new HttpFetchResult.ResultException(new IOException("Likely crawler trap"));
+                    }
+
+                    if (response.getCode() == 301 || response.getCode() == 302 || response.getCode() == 307) {
+                        // If the server responds with a redirect, we need to
+                        // update the request URI to the new location
+                        EdgeUrl redirectLocation = Optional.ofNullable(response.getFirstHeader("Location"))
+                                .map(NameValuePair::getValue)
+                                .flatMap(location -> linkParser.parseLink(new EdgeUrl(requestUri), location))
+                                .orElse(null);
+
+                        if (redirectLocation != null) {
+                            // If the redirect location is a valid URL, we need to update the request URI
+                            return new HttpFetchResult.ResultRedirect(redirectLocation);
+                        } else {
+                            // If the redirect location is not a valid URL, we need to throw an exception
+                            return new HttpFetchResult.ResultException(new IOException("Invalid redirect location: " + response.getFirstHeader("Location")));
+                        }
+                    }
+
+                    return new HttpFetchResult.ResultOk(responseUri,
+                            response.getCode(),
+                            inputBuffer.headers(),
+                            inetAddress.getHostAddress(),
+                            responseDataBuffer.data,
+                            dataStart,
+                            responseDataBuffer.length() - dataStart);
+                } catch (Exception ex) {
+                    ex.printStackTrace();
+                    flagAsError(new EdgeUrl(requestUri), ex); // write a WARC record to indicate the error
+                    logger.warn("Failed to fetch URL {}: {}", requestUri, ex.getMessage());
+                    return new HttpFetchResult.ResultException(ex);
+                }
+            });
+        }
+        // the client.execute() method will throw an exception if the request times out
+        // or on other IO exceptions, so we need to catch those here as well as having
+        // exception handling in the response handler
+        catch (SocketTimeoutException ex) {
+            flagAsTimeout(new EdgeUrl(requestUri)); // write a WARC record to indicate the timeout
+            return new HttpFetchResult.ResultException(ex);
+        }
+        catch (IOException ex) {
+            ex.printStackTrace();
+            flagAsError(new EdgeUrl(requestUri), ex); // write a WARC record to indicate the error
             logger.warn("Failed to fetch URL {}: {}", requestUri, ex.getMessage());
             return new HttpFetchResult.ResultException(ex);
         }
@@ -185,7 +261,7 @@ public class WarcRecorder implements AutoCloseable {
         writer.write(item);
     }
 
-    private void saveOldResponse(EdgeUrl url, String contentType, int statusCode, String documentBody, @Nullable String headers, ContentTags contentTags) {
+    private void saveOldResponse(EdgeUrl url, String contentType, int statusCode, byte[] documentBody, @Nullable String headers, ContentTags contentTags) {
         try {
             WarcDigestBuilder responseDigestBuilder = new WarcDigestBuilder();
             WarcDigestBuilder payloadDigestBuilder = new WarcDigestBuilder();
@@ -195,7 +271,7 @@ public class WarcRecorder implements AutoCloseable {
             if (documentBody == null) {
                 bytes = new byte[0];
             } else {
-                bytes = documentBody.getBytes();
+                bytes = documentBody;
             }
 
             // Create a synthesis of custom headers and the original headers
@@ -246,7 +322,9 @@ public class WarcRecorder implements AutoCloseable {
                     .date(Instant.now())
                     .body(MediaType.HTTP_RESPONSE, responseDataBuffer.copyBytes());
 
-            cookieInformation.paint(builder);
+            if (hasCookies()) {
+                builder.addHeader("X-Has-Cookies", "1");
+            }
 
             var reference = builder.build();
@@ -264,7 +342,7 @@
     * an E-Tag or Last-Modified header, and the server responds with a 304 Not Modified.  In this
     * scenario we want to record the data as it was in the previous crawl, but not re-fetch it.
     */
-    public void writeReferenceCopy(EdgeUrl url, String contentType, int statusCode, String documentBody, @Nullable String headers, ContentTags ctags) {
+    public void writeReferenceCopy(EdgeUrl url, String contentType, int statusCode, byte[] documentBody, @Nullable String headers, ContentTags ctags) {
         saveOldResponse(url, contentType, statusCode, documentBody, headers, ctags);
     }
 
@@ -285,6 +363,9 @@
             case HttpFetcherImpl.DomainProbeResult.Ok ok:
                 fields.put("X-WARC-Probe-Status", List.of("OK"));
                 break;
+            case HttpFetcher.DomainProbeResult.RedirectSameDomain_Internal redirectSameDomain:
+                fields.put("X-WARC-Probe-Status", List.of("REDIR-INTERNAL"));
+                break;
         }
 
         var warcinfo = new Warcinfo.Builder()


@@ -44,6 +44,14 @@ public class DomainLocks {
         return new Semaphore(2);
     }
 
+    public boolean canLock(EdgeDomain domain) {
+        Semaphore sem = locks.get(domain.topDomain.toLowerCase());
+        if (null == sem)
+            return true;
+        else
+            return sem.availablePermits() > 0;
+    }
+
     public static class DomainLock implements AutoCloseable {
         private final String domainName;
         private final Semaphore semaphore;
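The `canLock` addition above is a non-blocking peek at a per-domain semaphore, letting a worker skip a busy domain instead of queueing on it. A minimal sketch of the same idea, outside the Marginalia codebase (the map layout and domain strings here are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

public class DomainLockDemo {
    // One semaphore per top domain; absent entry means the domain was never locked.
    final Map<String, Semaphore> locks = new ConcurrentHashMap<>();

    boolean canLock(String topDomain) {
        Semaphore sem = locks.get(topDomain.toLowerCase());
        if (null == sem)
            return true;  // no semaphore allocated yet, nothing holds the domain
        else
            return sem.availablePermits() > 0;
    }

    public static void main(String[] args) {
        DomainLockDemo demo = new DomainLockDemo();
        System.out.println(demo.canLock("example.com")); // true: never locked

        Semaphore sem = new Semaphore(2);
        demo.locks.put("example.com", sem);
        sem.tryAcquire(2); // both permits taken
        System.out.println(demo.canLock("example.com")); // false
    }
}
```

Note the check is advisory: another thread may grab the last permit between `canLock` and a subsequent acquire, so callers must still be prepared to block or back off.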


@@ -4,6 +4,7 @@ import nu.marginalia.ContentTypes;
 import nu.marginalia.io.SerializableCrawlDataStream;
 import nu.marginalia.lsh.EasyLSH;
 import nu.marginalia.model.crawldata.CrawledDocument;
+import org.jetbrains.annotations.NotNull;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -11,54 +12,76 @@ import javax.annotation.Nullable;
 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;
+import java.util.Iterator;
+import java.util.Objects;
+import java.util.Optional;
 
 /** A reference to a domain that has been crawled before. */
-public class CrawlDataReference implements AutoCloseable {
+public class CrawlDataReference implements AutoCloseable, Iterable<CrawledDocument> {
+    private boolean closed = false;
 
-    private final SerializableCrawlDataStream data;
+    @Nullable
+    private final Path path;
+
+    @Nullable
+    private SerializableCrawlDataStream data = null;
 
     private static final Logger logger = LoggerFactory.getLogger(CrawlDataReference.class);
 
-    public CrawlDataReference(SerializableCrawlDataStream data) {
-        this.data = data;
+    public CrawlDataReference(@Nullable Path path) {
+        this.path = path;
     }
 
     public CrawlDataReference() {
-        this(SerializableCrawlDataStream.empty());
+        this(null);
    }
 
     /** Delete the associated data from disk, if it exists */
     public void delete() throws IOException {
-        Path filePath = data.path();
-
-        if (filePath != null) {
-            Files.deleteIfExists(filePath);
+        if (path != null) {
+            Files.deleteIfExists(path);
         }
     }
 
-    /** Get the next document from the crawl data,
-     * returning null when there are no more documents
-     * available
-     */
-    @Nullable
-    public CrawledDocument nextDocument() {
-        try {
-            while (data.hasNext()) {
-                if (data.next() instanceof CrawledDocument doc) {
-                    if (!ContentTypes.isAccepted(doc.contentType))
-                        continue;
-
-                    return doc;
-                }
-            }
-        }
-        catch (IOException ex) {
-            logger.error("Failed to read next document", ex);
-        }
-
-        return null;
+    public @NotNull Iterator<CrawledDocument> iterator() {
+        requireStream();
+        // Guaranteed by requireStream, but helps java
+        Objects.requireNonNull(data);
+
+        return data.map(next -> {
+            if (next instanceof CrawledDocument doc && ContentTypes.isAccepted(doc.contentType)) {
+                return Optional.of(doc);
+            }
+            else {
+                return Optional.empty();
+            }
+        });
+    }
+
+    /** After calling this method, data is guaranteed to be non-null */
+    private void requireStream() {
+        if (closed) {
+            throw new IllegalStateException("Use after close()");
+        }
+
+        if (data == null) {
+            try {
+                if (path != null) {
+                    data = SerializableCrawlDataStream.openDataStream(path);
+                    return;
+                }
+            }
+            catch (Exception ex) {
+                logger.error("Failed to open stream", ex);
+            }
+
+            data = SerializableCrawlDataStream.empty();
+        }
     }
 
-    public static boolean isContentBodySame(String one, String other) {
+    public static boolean isContentBodySame(byte[] one, byte[] other) {
 
         final long contentHashOne = contentHash(one);
         final long contentHashOther = contentHash(other);
@@ -66,7 +89,7 @@ public class CrawlDataReference implements AutoCloseable {
         return EasyLSH.hammingDistance(contentHashOne, contentHashOther) < 4;
     }
 
-    private static long contentHash(String content) {
+    private static long contentHash(byte[] content) {
         EasyLSH hash = new EasyLSH();
         int next = 0;
 
@@ -74,8 +97,8 @@
         // In a naive best-effort fashion, extract the text
         // content of the document and feed it into the LSH
-        for (int i = 0; i < content.length(); i++) {
-            char c = content.charAt(i);
+        for (byte b : content) {
+            char c = (char) b;
             if (c == '<') {
                 isInTag = true;
             } else if (c == '>') {
@@ -98,7 +121,12 @@
     }
 
     @Override
-    public void close() throws Exception {
-        data.close();
+    public void close() throws IOException {
+        if (!closed) {
+            if (data != null) {
+                data.close();
+            }
+            closed = true;
+        }
     }
 }
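The `isContentBodySame` check above treats two documents as unchanged when the Hamming distance between their 64-bit locality-sensitive hashes is below 4. The comparison itself can be sketched with plain `Long.bitCount`; this is only an illustration of the distance test, not of how `EasyLSH` derives the hashes:

```java
public class SimHash {
    // Hamming distance between two 64-bit hashes: XOR the values and count
    // the differing bits.
    static int hammingDistance(long a, long b) {
        return Long.bitCount(a ^ b);
    }

    // Same threshold as CrawlDataReference.isContentBodySame: fewer than
    // 4 differing bits counts as "the same document".
    static boolean isSame(long hashOne, long hashOther) {
        return hammingDistance(hashOne, hashOther) < 4;
    }

    public static void main(String[] args) {
        long h = 0xCAFEBABEL;
        System.out.println(isSame(h, h ^ 0b101L)); // true: only 2 bits differ
        System.out.println(isSame(h, ~h));         // false: all 64 bits differ
    }
}
```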


@@ -3,6 +3,7 @@ package nu.marginalia.crawl.retreival;
 import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
 
 import java.time.Duration;
+import java.util.concurrent.ThreadLocalRandom;
 
 import static java.lang.Math.max;
 import static java.lang.Math.min;
@@ -50,15 +51,20 @@ public class CrawlDelayTimer {
         waitFetchDelay(0);
     }
 
+    public void waitFetchDelay(Duration spentTime) {
+        waitFetchDelay(spentTime.toMillis());
+    }
+
     public void waitFetchDelay(long spentTime) {
         long sleepTime = delayTime;
 
+        long jitter = ThreadLocalRandom.current().nextLong(0, 150);
         try {
             if (sleepTime >= 1) {
                 if (spentTime > sleepTime)
                     return;
 
-                Thread.sleep(min(sleepTime - spentTime, 5000));
+                Thread.sleep(min(sleepTime - spentTime, 5000) + jitter);
             } else {
                 // When no crawl delay is specified, lean toward twice the fetch+process time,
                 // within sane limits. This means slower servers get slower crawling, and faster
@@ -71,17 +77,17 @@ public class CrawlDelayTimer {
                 if (spentTime > sleepTime)
                     return;
 
-                Thread.sleep(sleepTime - spentTime);
+                Thread.sleep(sleepTime - spentTime + jitter);
             }
 
             if (slowDown) {
                 // Additional delay when the server is signalling it wants slower requests
-                Thread.sleep(DEFAULT_CRAWL_DELAY_MIN_MS);
+                Thread.sleep(DEFAULT_CRAWL_DELAY_MIN_MS + jitter);
             }
         }
         catch (InterruptedException e) {
             Thread.currentThread().interrupt();
-            throw new RuntimeException();
+            throw new RuntimeException("Interrupted", e);
         }
     }
 }
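The delay arithmetic in the patch above, isolated from the sleeping: subtract the time already spent fetching from the crawl delay, cap the wait at 5 seconds, and add up to 150 ms of random jitter so many crawler threads don't fire in lockstep. A small sketch of just that computation (the helper name is made up for illustration):

```java
import java.util.concurrent.ThreadLocalRandom;

public class DelayDemo {
    // How long to sleep, given the configured crawl delay, the time already
    // spent on the fetch, and a pre-drawn jitter value (all in milliseconds).
    static long sleepMillis(long delayTime, long spentTime, long jitter) {
        if (spentTime > delayTime)
            return 0;                                  // fetch already took longer than the delay
        return Math.min(delayTime - spentTime, 5000) + jitter;
    }

    public static void main(String[] args) {
        long jitter = ThreadLocalRandom.current().nextLong(0, 150);
        System.out.println(sleepMillis(2000, 500, jitter) >= 1500);  // remaining delay plus jitter
        System.out.println(sleepMillis(2000, 2500, jitter) == 0);    // spent longer than the delay
    }
}
```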


@@ -0,0 +1,42 @@
+package nu.marginalia.crawl.retreival;
+
+import java.time.Duration;
+import java.time.Instant;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * This class is used to stagger the rate at which connections are created.
+ * <p></p>
+ * It is used to ensure that we do not create too many connections at once,
+ * which can lead to network congestion and other issues.  Since the connections
+ * tend to be very long-lived, we can afford to wait a bit before creating the next
+ * even if it adds a bit of build-up time when the crawl starts.
+ */
+public class CrawlerConnectionThrottle {
+    private Instant lastCrawlStart = Instant.EPOCH;
+    private final Semaphore launchSemaphore = new Semaphore(1);
+
+    private final Duration launchInterval;
+
+    public CrawlerConnectionThrottle(Duration launchInterval) {
+        this.launchInterval = launchInterval;
+    }
+
+    public void waitForConnectionPermission() throws InterruptedException {
+        try {
+            launchSemaphore.acquire();
+            Instant nextPermittedLaunch = lastCrawlStart.plus(launchInterval);
+
+            if (nextPermittedLaunch.isAfter(Instant.now())) {
+                long waitTime = Duration.between(Instant.now(), nextPermittedLaunch).toMillis();
+                TimeUnit.MILLISECONDS.sleep(waitTime);
+            }
+
+            lastCrawlStart = Instant.now();
+        }
+        finally {
+            launchSemaphore.release();
+        }
+    }
+}
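The throttle above serializes callers through a single-permit semaphore and spaces successive launches by the configured interval. A self-contained re-implementation showing the pacing behaviour (class and method names here are my own, not from the patch):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class ThrottleDemo {
    // Minimal version of CrawlerConnectionThrottle: callers acquiring
    // permission are spaced at least `interval` apart.
    static class Throttle {
        private Instant last = Instant.EPOCH;
        private final Semaphore sem = new Semaphore(1);
        private final Duration interval;

        Throttle(Duration interval) { this.interval = interval; }

        void await() throws InterruptedException {
            sem.acquire();
            try {
                Instant next = last.plus(interval);
                if (next.isAfter(Instant.now())) {
                    // Sleep off the remainder of the interval while holding the semaphore,
                    // so concurrent callers queue up behind us
                    TimeUnit.MILLISECONDS.sleep(Duration.between(Instant.now(), next).toMillis());
                }
                last = Instant.now();
            } finally {
                sem.release();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Throttle throttle = new Throttle(Duration.ofMillis(50));
        Instant start = Instant.now();
        for (int i = 0; i < 3; i++) throttle.await();
        long elapsed = Duration.between(start, Instant.now()).toMillis();
        // First acquisition is immediate (last == EPOCH); the next two wait ~50 ms each
        System.out.println(elapsed >= 100);
    }
}
```

Holding the semaphore across the sleep is deliberate: it makes waiting callers inherit the pacing instead of all racing through the moment the interval elapses.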


@@ -7,12 +7,10 @@ import nu.marginalia.crawl.CrawlerMain;
import nu.marginalia.crawl.DomainStateDb; import nu.marginalia.crawl.DomainStateDb;
import nu.marginalia.crawl.fetcher.ContentTags; import nu.marginalia.crawl.fetcher.ContentTags;
import nu.marginalia.crawl.fetcher.HttpFetcher; import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder; import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.logic.LinkFilterSelector; import nu.marginalia.crawl.logic.LinkFilterSelector;
import nu.marginalia.crawl.retreival.revisit.CrawlerRevisitor; import nu.marginalia.crawl.retreival.revisit.CrawlerRevisitor;
import nu.marginalia.crawl.retreival.revisit.DocumentWithReference; import nu.marginalia.crawl.retreival.revisit.DocumentWithReference;
import nu.marginalia.crawl.retreival.sitemap.SitemapFetcher;
import nu.marginalia.ip_blocklist.UrlBlocklist; import nu.marginalia.ip_blocklist.UrlBlocklist;
import nu.marginalia.link_parser.LinkParser; import nu.marginalia.link_parser.LinkParser;
import nu.marginalia.model.EdgeDomain; import nu.marginalia.model.EdgeDomain;
@@ -20,7 +18,6 @@ import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.DocumentBodyExtractor; import nu.marginalia.model.body.DocumentBodyExtractor;
import nu.marginalia.model.body.HttpFetchResult; import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.model.crawldata.CrawlerDomainStatus; import nu.marginalia.model.crawldata.CrawlerDomainStatus;
import org.jsoup.Jsoup;
import org.slf4j.Logger; import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
@@ -28,14 +25,16 @@ import java.io.IOException;
import java.net.InetAddress; import java.net.InetAddress;
import java.net.UnknownHostException; import java.net.UnknownHostException;
import java.nio.file.Path; import java.nio.file.Path;
import java.time.Duration;
import java.time.Instant;
import java.util.List; import java.util.List;
import java.util.Objects;
import java.util.Optional; import java.util.Optional;
import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeUnit;
public class CrawlerRetreiver implements AutoCloseable { public class CrawlerRetreiver implements AutoCloseable {
private static final int MAX_ERRORS = 20; private static final int MAX_ERRORS = 20;
private static final int HTTP_429_RETRY_LIMIT = 1; // Retry 429s once
private final HttpFetcher fetcher; private final HttpFetcher fetcher;
@@ -53,7 +52,10 @@ public class CrawlerRetreiver implements AutoCloseable {
private final WarcRecorder warcRecorder; private final WarcRecorder warcRecorder;
private final CrawlerRevisitor crawlerRevisitor; private final CrawlerRevisitor crawlerRevisitor;
private final SitemapFetcher sitemapFetcher; private static final CrawlerConnectionThrottle connectionThrottle = new CrawlerConnectionThrottle(
Duration.ofSeconds(1) // pace the connections to avoid network congestion at startup
);
int errorCount = 0; int errorCount = 0;
public CrawlerRetreiver(HttpFetcher fetcher, public CrawlerRetreiver(HttpFetcher fetcher,
@@ -71,7 +73,6 @@ public class CrawlerRetreiver implements AutoCloseable {
crawlFrontier = new DomainCrawlFrontier(new EdgeDomain(domain), specs.urls(), specs.crawlDepth()); crawlFrontier = new DomainCrawlFrontier(new EdgeDomain(domain), specs.urls(), specs.crawlDepth());
crawlerRevisitor = new CrawlerRevisitor(crawlFrontier, this, warcRecorder); crawlerRevisitor = new CrawlerRevisitor(crawlFrontier, this, warcRecorder);
sitemapFetcher = new SitemapFetcher(crawlFrontier, fetcher.createSitemapRetriever());
// We must always crawl the index page first, this is assumed when fingerprinting the server // We must always crawl the index page first, this is assumed when fingerprinting the server
var fst = crawlFrontier.peek(); var fst = crawlFrontier.peek();
@@ -93,30 +94,63 @@ public class CrawlerRetreiver implements AutoCloseable {
} }
public int crawlDomain(DomainLinks domainLinks, CrawlDataReference oldCrawlData) { public int crawlDomain(DomainLinks domainLinks, CrawlDataReference oldCrawlData) {
try { try (oldCrawlData) {
// Do an initial domain probe to determine the root URL
EdgeUrl rootUrl;
// Wait for permission to open a connection to avoid network congestion
// from hundreds/thousands of TCP handshakes
connectionThrottle.waitForConnectionPermission();
// Do an initial domain probe to determine the root URL
var probeResult = probeRootUrl(); var probeResult = probeRootUrl();
switch (probeResult) {
return switch (probeResult) {
 case HttpFetcher.DomainProbeResult.Ok(EdgeUrl probedUrl) -> {
-    rootUrl = probedUrl; // Good track
+    // Sleep after the initial probe, we don't have access to the robots.txt yet
+    // so we don't know the crawl delay
+    TimeUnit.SECONDS.sleep(1);
+
+    final SimpleRobotRules robotsRules = fetcher.fetchRobotRules(probedUrl.domain, warcRecorder);
+    final CrawlDelayTimer delayTimer = new CrawlDelayTimer(robotsRules.getCrawlDelay());
+
+    delayTimer.waitFetchDelay(0); // initial delay after robots.txt
+
+    DomainStateDb.SummaryRecord summaryRecord = sniffRootDocument(probedUrl, delayTimer);
+    domainStateDb.save(summaryRecord);
+
+    if (Thread.interrupted()) {
+        // There's a small chance we're interrupted during the sniffing portion
+        throw new InterruptedException();
+    }
+
+    Instant recrawlStart = Instant.now();
+    CrawlerRevisitor.RecrawlMetadata recrawlMetadata = crawlerRevisitor.recrawl(oldCrawlData, robotsRules, delayTimer);
+    Duration recrawlTime = Duration.between(recrawlStart, Instant.now());
+
+    // Play back the old crawl data (if present) and fetch the documents comparing etags and last-modified
+    if (recrawlMetadata.size() > 0) {
+        // If we have reference data, we will always grow the crawl depth a bit
+        crawlFrontier.increaseDepth(1.5, 2500);
+    }
+
+    oldCrawlData.close(); // proactively close the crawl data reference here to not hold onto expensive resources
+
+    yield crawlDomain(probedUrl, robotsRules, delayTimer, domainLinks, recrawlMetadata, recrawlTime);
 }
 case HttpFetcher.DomainProbeResult.Redirect(EdgeDomain domain1) -> {
     domainStateDb.save(DomainStateDb.SummaryRecord.forError(domain, "Redirect", domain1.toString()));
-    return 1;
+    yield 1;
 }
 case HttpFetcher.DomainProbeResult.Error(CrawlerDomainStatus status, String desc) -> {
     domainStateDb.save(DomainStateDb.SummaryRecord.forError(domain, status.toString(), desc));
-    return 1;
+    yield 1;
 }
-}
+default -> {
+    logger.error("Unexpected domain probe result {}", probeResult);
+    yield 1;
+}
+};
-
-// Sleep after the initial probe, we don't have access to the robots.txt yet
-// so we don't know the crawl delay
-TimeUnit.SECONDS.sleep(1);
-
-return crawlDomain(oldCrawlData, rootUrl, domainLinks);
 }
 catch (Exception ex) {
     logger.error("Error crawling domain {}", domain, ex);
@@ -124,30 +158,31 @@ public class CrawlerRetreiver implements AutoCloseable {
     }
 }

-private int crawlDomain(CrawlDataReference oldCrawlData,
-                        EdgeUrl rootUrl,
-                        DomainLinks domainLinks) throws InterruptedException {
-
-    final SimpleRobotRules robotsRules = fetcher.fetchRobotRules(rootUrl.domain, warcRecorder);
-    final CrawlDelayTimer delayTimer = new CrawlDelayTimer(robotsRules.getCrawlDelay());
-
-    delayTimer.waitFetchDelay(0); // initial delay after robots.txt
-
-    DomainStateDb.SummaryRecord summaryRecord = sniffRootDocument(rootUrl, delayTimer);
-    domainStateDb.save(summaryRecord);
-
-    // Play back the old crawl data (if present) and fetch the documents comparing etags and last-modified
-    if (crawlerRevisitor.recrawl(oldCrawlData, robotsRules, delayTimer) > 0) {
-        // If we have reference data, we will always grow the crawl depth a bit
-        crawlFrontier.increaseDepth(1.5, 2500);
-    }
+private int crawlDomain(EdgeUrl rootUrl,
+                        SimpleRobotRules robotsRules,
+                        CrawlDelayTimer delayTimer,
+                        DomainLinks domainLinks,
+                        CrawlerRevisitor.RecrawlMetadata recrawlMetadata,
+                        Duration recrawlTime) {
+
+    Instant crawlStart = Instant.now();

     // Add external links to the crawl frontier
     crawlFrontier.addAllToQueue(domainLinks.getUrls(rootUrl.proto));

-    // Add links from the sitemap to the crawl frontier
-    sitemapFetcher.downloadSitemaps(robotsRules, rootUrl);
+    // Fetch sitemaps
+    for (var sitemap : robotsRules.getSitemaps()) {
+        // Validate the sitemap URL and check that it belongs to the same domain as the root URL
+        if (EdgeUrl.parse(sitemap)
+                .map(url -> url.getDomain().equals(rootUrl.domain))
+                .orElse(false)) {
+            crawlFrontier.addAllToQueue(fetcher.fetchSitemapUrls(sitemap, delayTimer));
+        }
+    }
+
+    int crawlerAdditions = 0;

     while (!crawlFrontier.isEmpty()
         && !crawlFrontier.isCrawlDepthReached()
@@ -180,7 +215,11 @@ public class CrawlerRetreiver implements AutoCloseable {
         continue;

     try {
-        fetchContentWithReference(top, delayTimer, DocumentWithReference.empty());
+        var result = fetchContentWithReference(top, delayTimer, DocumentWithReference.empty());
+
+        if (result.isOk()) {
+            crawlerAdditions++;
+        }
     }
     catch (InterruptedException ex) {
         Thread.currentThread().interrupt();
@@ -188,6 +227,17 @@ public class CrawlerRetreiver implements AutoCloseable {
         }
     }

+    Duration crawlTime = Duration.between(crawlStart, Instant.now());
+
+    domainStateDb.save(new DomainStateDb.CrawlMeta(
+            domain,
+            Instant.now(),
+            recrawlTime,
+            crawlTime,
+            recrawlMetadata.errors(),
+            crawlerAdditions,
+            recrawlMetadata.size() + crawlerAdditions
+    ));
+
     return crawlFrontier.visitedSize();
 }
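The `DomainStateDb.CrawlMeta` call above implies a record roughly shaped like this. This is a sketch inferred from the argument list only; the actual field names and types (e.g. `EdgeDomain` rather than `String` for the domain) live in `DomainStateDb` and may differ:

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical reconstruction of the CrawlMeta record; field names are assumptions.
record CrawlMeta(
        String domain,          // likely an EdgeDomain in the real code
        Instant lastUpdated,
        Duration recrawlTime,   // time spent replaying old crawl data
        Duration crawlTime,     // time spent fetching fresh documents
        int recrawlErrors,
        int crawlChanges,       // documents newly fetched this run
        int totalCrawlSize) {}  // recrawled size + new additions
```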
@@ -216,17 +266,29 @@ public class CrawlerRetreiver implements AutoCloseable {
     return domainProbeResult;
 }

 private DomainStateDb.SummaryRecord sniffRootDocument(EdgeUrl rootUrl, CrawlDelayTimer timer) {
     Optional<String> feedLink = Optional.empty();

     try {
         var url = rootUrl.withPathAndParam("/", null);

-        HttpFetchResult result = fetchWithRetry(url, timer, HttpFetcher.ProbeType.DISABLED, ContentTags.empty());
+        HttpFetchResult result = fetcher.fetchContent(url, warcRecorder, timer, ContentTags.empty(), HttpFetcher.ProbeType.DISABLED);
         timer.waitFetchDelay(0);

-        if (!(result instanceof HttpFetchResult.ResultOk ok))
+        if (result instanceof HttpFetchResult.ResultRedirect(EdgeUrl location)) {
+            if (Objects.equals(location.domain, url.domain)) {
+                // TODO: Follow the redirect to the new location and sniff the document
+                crawlFrontier.addFirst(location);
+            }
             return DomainStateDb.SummaryRecord.forSuccess(domain);
+        }
+
+        if (!(result instanceof HttpFetchResult.ResultOk ok)) {
+            return DomainStateDb.SummaryRecord.forSuccess(domain);
+        }

         var optDoc = ok.parseDocument();
         if (optDoc.isEmpty())
@@ -271,18 +333,28 @@ public class CrawlerRetreiver implements AutoCloseable {
 }

 // Download the sitemap if available
-if (feedLink.isPresent()) {
-    sitemapFetcher.downloadSitemaps(List.of(feedLink.get()));
-    timer.waitFetchDelay(0);
-}
+feedLink.ifPresent(s -> fetcher.fetchSitemapUrls(s, timer));

 // Grab the favicon if it exists
-fetchWithRetry(faviconUrl, timer, HttpFetcher.ProbeType.DISABLED, ContentTags.empty());
+if (fetcher.fetchContent(faviconUrl, warcRecorder, timer, ContentTags.empty(), HttpFetcher.ProbeType.DISABLED) instanceof HttpFetchResult.ResultOk iconResult) {
+    String contentType = iconResult.header("Content-Type");
+    byte[] iconData = iconResult.getBodyBytes();
+
+    domainStateDb.saveIcon(
+            domain,
+            new DomainStateDb.FaviconRecord(contentType, iconData)
+    );
+}
 timer.waitFetchDelay(0);
 }
 catch (Exception ex) {
     logger.error("Error configuring link filter", ex);
+    if (Thread.interrupted()) {
+        Thread.currentThread().interrupt();
+        return DomainStateDb.SummaryRecord.forError(domain, "Crawler Interrupted", ex.getMessage());
+    }
 }
 finally {
     crawlFrontier.addVisited(rootUrl);
@@ -310,7 +382,7 @@ public class CrawlerRetreiver implements AutoCloseable {
 );

 private Optional<String> guessFeedUrl(CrawlDelayTimer timer) throws InterruptedException {
-    var oldDomainStateRecord = domainStateDb.get(domain);
+    var oldDomainStateRecord = domainStateDb.getSummary(domain);

     // If we are already aware of an old feed URL, then we can just revalidate it
     if (oldDomainStateRecord.isPresent()) {
@@ -335,7 +407,7 @@ public class CrawlerRetreiver implements AutoCloseable {
     if (parsedOpt.isEmpty())
         return false;

-    HttpFetchResult result = fetchWithRetry(parsedOpt.get(), timer, HttpFetcher.ProbeType.DISABLED, ContentTags.empty());
+    HttpFetchResult result = fetcher.fetchContent(parsedOpt.get(), warcRecorder, timer, ContentTags.empty(), HttpFetcher.ProbeType.DISABLED);
     timer.waitFetchDelay(0);

     if (!(result instanceof HttpFetchResult.ResultOk ok)) {
@@ -361,110 +433,63 @@ public class CrawlerRetreiver implements AutoCloseable {
                                   CrawlDelayTimer timer,
                                   DocumentWithReference reference) throws InterruptedException
 {
-    logger.debug("Fetching {}", top);
-
-    long startTime = System.currentTimeMillis();
     var contentTags = reference.getContentTags();

-    HttpFetchResult fetchedDoc = fetchWithRetry(top, timer, HttpFetcher.ProbeType.FULL, contentTags);
+    HttpFetchResult fetchedDoc = fetcher.fetchContent(top, warcRecorder, timer, contentTags, HttpFetcher.ProbeType.FULL);
+    timer.waitFetchDelay();
+
+    if (Thread.interrupted()) {
+        Thread.currentThread().interrupt();
+        throw new InterruptedException();
+    }

     // Parse the document and enqueue links
     try {
-        if (fetchedDoc instanceof HttpFetchResult.ResultOk ok) {
-            var docOpt = ok.parseDocument();
-            if (docOpt.isPresent()) {
-                var doc = docOpt.get();
-
-                crawlFrontier.enqueueLinksFromDocument(top, doc);
-                crawlFrontier.addVisited(new EdgeUrl(ok.uri()));
-            }
-        }
-        else if (fetchedDoc instanceof HttpFetchResult.Result304Raw && reference.doc() != null) {
-            var doc = reference.doc();
-
-            warcRecorder.writeReferenceCopy(top, doc.contentType, doc.httpStatus, doc.documentBody, doc.headers, contentTags);
-
-            fetchedDoc = new HttpFetchResult.Result304ReplacedWithReference(doc.url,
-                    new ContentType(doc.contentType, "UTF-8"),
-                    doc.documentBody);
-
-            if (doc.documentBody != null) {
-                var parsed = Jsoup.parse(doc.documentBody);
-
-                crawlFrontier.enqueueLinksFromDocument(top, parsed);
-                crawlFrontier.addVisited(top);
-            }
-        }
-        else if (fetchedDoc instanceof HttpFetchResult.ResultException) {
-            errorCount ++;
-        }
+        switch (fetchedDoc) {
+            case HttpFetchResult.ResultOk ok -> {
+                var docOpt = ok.parseDocument();
+                if (docOpt.isPresent()) {
+                    var doc = docOpt.get();
+
+                    var responseUrl = new EdgeUrl(ok.uri());
+
+                    crawlFrontier.enqueueLinksFromDocument(responseUrl, doc);
+                    crawlFrontier.addVisited(responseUrl);
+                }
+            }
+            case HttpFetchResult.Result304Raw ref when reference.doc() != null -> {
+                var doc = reference.doc();
+
+                warcRecorder.writeReferenceCopy(top, doc.contentType, doc.httpStatus, doc.documentBodyBytes, doc.headers, contentTags);
+
+                fetchedDoc = new HttpFetchResult.Result304ReplacedWithReference(doc.url,
+                        new ContentType(doc.contentType, "UTF-8"),
+                        doc.documentBodyBytes);
+
+                if (doc.documentBodyBytes != null) {
+                    var parsed = doc.parseBody();
+
+                    crawlFrontier.enqueueLinksFromDocument(top, parsed);
+                    crawlFrontier.addVisited(top);
+                }
+            }
+            case HttpFetchResult.ResultRedirect(EdgeUrl location) -> {
+                if (Objects.equals(location.domain, top.domain)) {
+                    crawlFrontier.addFirst(location);
+                }
+            }
+            case HttpFetchResult.ResultException ex -> errorCount++;
+            default -> {} // Ignore other types
+        }
     }
     catch (Exception ex) {
         logger.error("Error parsing document {}", top, ex);
     }

-    timer.waitFetchDelay(System.currentTimeMillis() - startTime);
-
     return fetchedDoc;
 }

-/** Fetch a document and retry on 429s */
-private HttpFetchResult fetchWithRetry(EdgeUrl url,
-                                       CrawlDelayTimer timer,
-                                       HttpFetcher.ProbeType probeType,
-                                       ContentTags contentTags) throws InterruptedException {
-
-    long probeStart = System.currentTimeMillis();
-
-    if (probeType == HttpFetcher.ProbeType.FULL) {
-        retryLoop:
-        for (int i = 0; i <= HTTP_429_RETRY_LIMIT; i++) {
-            try {
-                var probeResult = fetcher.probeContentType(url, warcRecorder, contentTags);
-
-                switch (probeResult) {
-                    case HttpFetcher.ContentTypeProbeResult.Ok(EdgeUrl resolvedUrl):
-                        url = resolvedUrl; // If we were redirected while probing, use the final URL for fetching
-                        break retryLoop;
-                    case HttpFetcher.ContentTypeProbeResult.BadContentType badContentType:
-                        return new HttpFetchResult.ResultNone();
-                    case HttpFetcher.ContentTypeProbeResult.BadContentType.Timeout timeout:
-                        return new HttpFetchResult.ResultException(timeout.ex());
-                    case HttpFetcher.ContentTypeProbeResult.Exception exception:
-                        return new HttpFetchResult.ResultException(exception.ex());
-                    default: // should be unreachable
-                        throw new IllegalStateException("Unknown probe result");
-                }
-            }
-            catch (HttpFetcherImpl.RateLimitException ex) {
-                timer.waitRetryDelay(ex);
-            }
-            catch (Exception ex) {
-                logger.warn("Failed to fetch {}", url, ex);
-                return new HttpFetchResult.ResultException(ex);
-            }
-        }
-
-        timer.waitFetchDelay(System.currentTimeMillis() - probeStart);
-    }
-
-    for (int i = 0; i <= HTTP_429_RETRY_LIMIT; i++) {
-        try {
-            return fetcher.fetchContent(url, warcRecorder, contentTags, probeType);
-        }
-        catch (HttpFetcherImpl.RateLimitException ex) {
-            timer.waitRetryDelay(ex);
-        }
-        catch (Exception ex) {
-            logger.warn("Failed to fetch {}", url, ex);
-            return new HttpFetchResult.ResultException(ex);
-        }
-    }
-
-    return new HttpFetchResult.ResultNone();
-}
-
 private boolean isAllowedProtocol(String proto) {
     return proto.equalsIgnoreCase("http")
         || proto.equalsIgnoreCase("https");
@@ -55,6 +55,9 @@ public class DomainCrawlFrontier {
     }
 }

+public EdgeDomain getDomain() {
+    return thisDomain;
+}
+
 /** Increase the depth of the crawl by a factor. If the current depth is smaller
  * than the number of already visited documents, the base depth will be adjusted
  * to the visited count first.
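The depth-growth rule described in the javadoc above can be sketched as follows. This is a hypothetical illustration of an `increaseDepth(factor, maxDelta)`-style rule, not the actual implementation:

```java
// Sketch of the documented depth-growth behavior; names and the cap semantics are assumptions.
public class DepthExample {
    static int increaseDepth(int depth, int visited, double factor, int maxDelta) {
        int base = Math.max(depth, visited);     // adjust base depth to visited count first
        int grown = (int) (base * factor);       // grow by the given factor
        return Math.min(grown, base + maxDelta); // cap the growth per call
    }

    public static void main(String[] args) {
        // With depth 1000 but 1800 documents already visited:
        System.out.println(increaseDepth(1000, 1800, 1.5, 2500)); // prints 2700
    }
}
```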
@@ -1,6 +1,5 @@
 package nu.marginalia.crawl.retreival.revisit;

-import com.google.common.base.Strings;
 import crawlercommons.robots.SimpleRobotRules;
 import nu.marginalia.crawl.fetcher.ContentTags;
 import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
@@ -11,17 +10,23 @@ import nu.marginalia.crawl.retreival.DomainCrawlFrontier;
 import nu.marginalia.model.EdgeUrl;
 import nu.marginalia.model.body.HttpFetchResult;
 import nu.marginalia.model.crawldata.CrawledDocument;
-import org.jsoup.Jsoup;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;

 /** This class encapsulates the logic for re-visiting a domain that has already been crawled.
  * We may use information from the previous crawl to inform the next crawl, specifically the
  * E-Tag and Last-Modified headers.
  */
 public class CrawlerRevisitor {
     private final DomainCrawlFrontier crawlFrontier;
     private final CrawlerRetreiver crawlerRetreiver;
     private final WarcRecorder warcRecorder;
+
+    private static final Logger logger = LoggerFactory.getLogger(CrawlerRevisitor.class);

     public CrawlerRevisitor(DomainCrawlFrontier crawlFrontier,
                             CrawlerRetreiver crawlerRetreiver,
                             WarcRecorder warcRecorder) {
@@ -31,7 +36,7 @@ public class CrawlerRevisitor {
 }

 /** Performs a re-crawl of old documents, comparing etags and last-modified */
-public int recrawl(CrawlDataReference oldCrawlData,
-                   SimpleRobotRules robotsRules,
-                   CrawlDelayTimer delayTimer)
+public RecrawlMetadata recrawl(CrawlDataReference oldCrawlData,
+                               SimpleRobotRules robotsRules,
+                               CrawlDelayTimer delayTimer)
     throws InterruptedException {
@@ -39,19 +44,18 @@ public class CrawlerRevisitor {
 int retained = 0;
 int errors = 0;
 int skipped = 0;
+int size = 0;

-for (;;) {
+for (CrawledDocument doc : oldCrawlData) {
     if (errors > 20) {
         // If we've had too many errors, we'll stop trying to recrawl
         break;
     }

-    CrawledDocument doc = oldCrawlData.nextDocument();
-
-    if (doc == null)
-        break;
+    if (Thread.interrupted()) {
+        throw new InterruptedException();
+    }

-    // This Shouldn't Happen (TM)
     var urlMaybe = EdgeUrl.parse(doc.url);
     if (urlMaybe.isEmpty())
         continue;
@@ -70,7 +74,7 @@ public class CrawlerRevisitor {
     // unlikely to produce anything meaningful for us.
     if (doc.httpStatus != 200)
         continue;

-    if (Strings.isNullOrEmpty(doc.documentBody))
+    if (!doc.hasBody())
         continue;

     if (!crawlFrontier.filterLink(url))
@@ -84,6 +88,7 @@ public class CrawlerRevisitor {
         continue;
     }

+    size++;

     double skipProb;
@@ -117,14 +122,19 @@ public class CrawlerRevisitor {
 // fashion to make sure we eventually catch changes over time
 // and ensure we discover new links

-// Hoover up any links from the document
-crawlFrontier.enqueueLinksFromDocument(url, Jsoup.parse(doc.documentBody));
+try {
+    // Hoover up any links from the document
+    crawlFrontier.enqueueLinksFromDocument(url, doc.parseBody());
+}
+catch (IOException ex) {
+    //
+}

 // Add a WARC record so we don't repeat this
 warcRecorder.writeReferenceCopy(url,
         doc.contentType,
         doc.httpStatus,
-        doc.documentBody,
+        doc.documentBodyBytes,
         doc.headers,
         new ContentTags(doc.etagMaybe, doc.lastModifiedMaybe)
 );
@@ -146,11 +156,15 @@ public class CrawlerRevisitor {
         else if (result instanceof HttpFetchResult.ResultException) {
             errors++;
         }

         recrawled++;
     }
 }

-return recrawled;
+logger.info("Recrawl summary {}: {} recrawled, {} retained, {} errors, {} skipped",
+        crawlFrontier.getDomain(), recrawled, retained, errors, skipped);
+
+return new RecrawlMetadata(size, errors, skipped);
 }

+public record RecrawlMetadata(int size, int errors, int skipped) {}
 }
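The E-Tag/Last-Modified revalidation that `ContentTags` feeds into boils down to standard HTTP conditional request headers (the server answers 304 Not Modified when the stored validators still match). A self-contained sketch of that mapping; the header names are standard HTTP, but this helper and its wiring are assumptions, not Marginalia's code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class ConditionalGetExample {
    // Build the conditional headers for revalidating a previously crawled document.
    static Map<String, String> conditionalHeaders(String etag, String lastModified) {
        Map<String, String> headers = new LinkedHashMap<>();
        if (etag != null)
            headers.put("If-None-Match", etag);           // server replies 304 if the entity is unchanged
        if (lastModified != null)
            headers.put("If-Modified-Since", lastModified);
        return headers;
    }
}
```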
@@ -2,12 +2,11 @@ package nu.marginalia.crawl.retreival.revisit;
 import nu.marginalia.crawl.fetcher.ContentTags;
 import nu.marginalia.crawl.retreival.CrawlDataReference;
-import nu.marginalia.model.body.DocumentBodyExtractor;
-import nu.marginalia.model.body.DocumentBodyResult;
 import nu.marginalia.model.body.HttpFetchResult;
 import nu.marginalia.model.crawldata.CrawledDocument;

 import javax.annotation.Nullable;
+import java.util.Objects;

 public record DocumentWithReference(
         @Nullable CrawledDocument doc,
@@ -35,21 +34,31 @@ public record DocumentWithReference(
     return false;
 if (doc == null)
     return false;
-if (doc.documentBody == null)
-    return false;
-
-if (!(DocumentBodyExtractor.asString(resultOk) instanceof DocumentBodyResult.Ok<String> bodyOk)) {
-    return false;
+if (doc.documentBodyBytes.length == 0) {
+    if (doc.httpStatus < 300) {
+        return resultOk.bytesLength() == 0;
+    }
+    else if (doc.httpStatus == 301 || doc.httpStatus == 302 || doc.httpStatus == 307) {
+        @Nullable
+        String docLocation = doc.getHeader("Location");
+        @Nullable
+        String resultLocation = resultOk.header("Location");
+
+        return Objects.equals(docLocation, resultLocation);
+    }
+    else {
+        return doc.httpStatus == resultOk.statusCode();
+    }
 }

-return CrawlDataReference.isContentBodySame(doc.documentBody, bodyOk.body());
+return CrawlDataReference.isContentBodySame(doc.documentBodyBytes, resultOk.bytesRaw());
 }

 public ContentTags getContentTags() {
     if (null == doc)
         return ContentTags.empty();

-    if (doc.documentBody == null || doc.httpStatus != 200)
+    if (doc.documentBodyBytes.length == 0 || doc.httpStatus != 200)
         return ContentTags.empty();

     String lastmod = doc.getLastModified();
@@ -1,72 +0,0 @@
-package nu.marginalia.crawl.retreival.sitemap;
-
-import crawlercommons.robots.SimpleRobotRules;
-import nu.marginalia.crawl.fetcher.SitemapRetriever;
-import nu.marginalia.crawl.retreival.DomainCrawlFrontier;
-import nu.marginalia.model.EdgeUrl;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.util.HashSet;
-import java.util.List;
-import java.util.Optional;
-import java.util.Set;
-
-public class SitemapFetcher {
-
-    private final DomainCrawlFrontier crawlFrontier;
-    private final SitemapRetriever sitemapRetriever;
-    private static final Logger logger = LoggerFactory.getLogger(SitemapFetcher.class);
-
-    public SitemapFetcher(DomainCrawlFrontier crawlFrontier, SitemapRetriever sitemapRetriever) {
-        this.crawlFrontier = crawlFrontier;
-        this.sitemapRetriever = sitemapRetriever;
-    }
-
-    public void downloadSitemaps(SimpleRobotRules robotsRules, EdgeUrl rootUrl) {
-        List<String> urls = robotsRules.getSitemaps();
-
-        if (urls.isEmpty()) {
-            urls = List.of(rootUrl.withPathAndParam("/sitemap.xml", null).toString());
-        }
-
-        downloadSitemaps(urls);
-    }
-
-    public void downloadSitemaps(List<String> urls) {
-
-        Set<String> checkedSitemaps = new HashSet<>();
-
-        for (var rawUrl : urls) {
-            Optional<EdgeUrl> parsedUrl = EdgeUrl.parse(rawUrl);
-            if (parsedUrl.isEmpty()) {
-                continue;
-            }
-
-            EdgeUrl url = parsedUrl.get();
-
-            // Let's not download sitemaps from other domains for now
-            if (!crawlFrontier.isSameDomain(url)) {
-                continue;
-            }
-
-            if (checkedSitemaps.contains(url.path))
-                continue;
-
-            var sitemap = sitemapRetriever.fetchSitemap(url);
-            if (sitemap.isEmpty()) {
-                continue;
-            }
-
-            // ensure we don't try to download this sitemap again
-            // (don't move this up, as we may want to check the same
-            // path with different protocols until we find one that works)
-            checkedSitemaps.add(url.path);
-
-            crawlFrontier.addAllToQueue(sitemap);
-        }
-
-        logger.debug("Queue is now {}", crawlFrontier.queueSize());
-    }
-}
@@ -32,15 +32,17 @@ dependencies {
 implementation libs.bundles.parquet
 implementation libs.trove
+implementation libs.slop
 implementation libs.jwarc

 implementation libs.gson
 implementation libs.commons.io
 implementation libs.commons.lang3
-implementation libs.okhttp3
 implementation libs.jsoup
 implementation libs.snakeyaml
 implementation libs.zstd
+implementation libs.bundles.httpcomponents

 testImplementation libs.bundles.slf4j.test
 testImplementation libs.bundles.junit
 testImplementation libs.mockito
@@ -1,45 +0,0 @@
-package nu.marginalia.io;
-
-import nu.marginalia.io.crawldata.format.ParquetSerializableCrawlDataStream;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.FileNotFoundException;
-import java.io.IOException;
-import java.nio.file.Files;
-import java.nio.file.Path;
-
-public class CrawledDomainReader {
-    private static final Logger logger = LoggerFactory.getLogger(CrawledDomainReader.class);
-
-    /** An iterator-like access to domain data  This must be closed otherwise it will leak off-heap memory! */
-    public static SerializableCrawlDataStream createDataStream(Path fullPath) throws IOException
-    {
-        String fileName = fullPath.getFileName().toString();
-        if (fileName.endsWith(".parquet")) {
-            try {
-                return new ParquetSerializableCrawlDataStream(fullPath);
-            } catch (Exception ex) {
-                logger.error("Error reading domain data from " + fullPath, ex);
-                return SerializableCrawlDataStream.empty();
-            }
-        } else {
-            logger.error("Unknown file type: {}", fullPath);
-            return SerializableCrawlDataStream.empty();
-        }
-    }
-
-    /** An iterator-like access to domain data. This must be closed otherwise it will leak off-heap memory! */
-    public static SerializableCrawlDataStream createDataStream(Path basePath, String domain, String id) throws IOException {
-        Path parquetPath = CrawlerOutputFile.getParquetPath(basePath, id, domain);
-
-        if (Files.exists(parquetPath)) {
-            return createDataStream(parquetPath);
-        }
-        else {
-            throw new FileNotFoundException("No such file: " + parquetPath);
-        }
-    }
-}
@@ -35,7 +35,7 @@ public class CrawlerOutputFile {
     return destDir.resolve(id + "-" + filesystemSafeName(domain) + "-" + version.suffix + ".warc.gz");
 }

-public static Path createParquetPath(Path basePath, String id, String domain) throws IOException {
+public static Path createSlopPath(Path basePath, String id, String domain) throws IOException {
     id = padId(id);

     String first = id.substring(0, 2);
@@ -45,8 +45,9 @@ public class CrawlerOutputFile {
     if (!Files.exists(destDir)) {
         Files.createDirectories(destDir);
     }
-    return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".parquet");
+
+    return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".slop.zip");
 }

 public static Path getParquetPath(Path basePath, String id, String domain) {
     id = padId(id);
@@ -56,16 +57,18 @@ public class CrawlerOutputFile {
     Path destDir = basePath.resolve(first).resolve(second);
     return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".parquet");
 }

-public static Path getWarcPath(Path basePath, String id, String domain, WarcFileVersion version) {
+public static Path getSlopPath(Path basePath, String id, String domain) {
     id = padId(id);

     String first = id.substring(0, 2);
     String second = id.substring(2, 4);

     Path destDir = basePath.resolve(first).resolve(second);
-    return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".warc" + version.suffix);
+    return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".slop.zip");
 }

 /**
  * Pads the given ID with leading zeros to ensure it has a length of 4 characters.
  */
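The `padId` helper documented above is private to the class; its described behavior can be sketched like this (hypothetical reimplementation, not the actual code):

```java
// Left-pads a numeric ID string with zeros to a minimum width of 4 characters.
class PadIdExample {
    static String padId(String id) {
        return "0".repeat(Math.max(0, 4 - id.length())) + id;
    }
}
```

So `padId("7")` yields `"0007"`, while IDs already four or more characters long pass through unchanged.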
@@ -1,35 +1,122 @@
 package nu.marginalia.io;

+import nu.marginalia.io.crawldata.format.ParquetSerializableCrawlDataStream;
+import nu.marginalia.io.crawldata.format.SlopSerializableCrawlDataStream;
 import nu.marginalia.model.crawldata.CrawledDocument;
 import nu.marginalia.model.crawldata.CrawledDomain;
 import nu.marginalia.model.crawldata.SerializableCrawlData;
 import org.jetbrains.annotations.Nullable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;

 import java.io.IOException;
 import java.nio.file.Path;
 import java.util.ArrayList;
 import java.util.Iterator;
 import java.util.List;
+import java.util.Optional;
+import java.util.function.Function;

 /** Closable iterator exceptional over serialized crawl data
  * The data may appear in any order, and the iterator must be closed.
  *
- * @see CrawledDomainReader
  * */
 public interface SerializableCrawlDataStream extends AutoCloseable {
+    Logger logger = LoggerFactory.getLogger(SerializableCrawlDataStream.class);

     SerializableCrawlData next() throws IOException;

     /** Return a size hint for the stream. 0 is returned if the hint is not available,
      * or if the file seems too small to bother */
-    default int sizeHint() { return 0; }
+    default int getSizeHint() { return 0; }

     boolean hasNext() throws IOException;

     @Nullable
     default Path path() { return null; }
+    void close() throws IOException;
+
+    /** An iterator-like access to domain data. This must be closed otherwise it will leak off-heap memory! */
+    static SerializableCrawlDataStream openDataStream(Path fullPath) throws IOException
+    {
+        String fileName = fullPath.getFileName().toString();
+
+        if (fileName.endsWith(".slop.zip")) {
+            try {
+                return new SlopSerializableCrawlDataStream(fullPath);
+            } catch (Exception ex) {
+                logger.error("Error reading domain data from " + fullPath, ex);
+                return SerializableCrawlDataStream.empty();
+            }
+        }
+        else if (fileName.endsWith(".parquet")) {
+            logger.error("Opening deprecated parquet-style crawl data stream", new Exception());
+            try {
+                return new ParquetSerializableCrawlDataStream(fullPath);
+            } catch (Exception ex) {
+                logger.error("Error reading domain data from " + fullPath, ex);
+                return SerializableCrawlDataStream.empty();
+            }
+        }
+
+        logger.error("Unknown file type: {}", fullPath);
+        return SerializableCrawlDataStream.empty();
+    }
+
+    /** Get an indication of the size of the stream. This is used to determine whether to
+     * load the stream into memory or not. 0 is returned if the hint is not available,
+     * or if the file seems too small to bother */
+    static int getSizeHint(Path fullPath) {
+        String fileName = fullPath.getFileName().toString();
+        if (fileName.endsWith(".parquet")) {
+            return ParquetSerializableCrawlDataStream.sizeHint(fullPath);
+        }
+        else if (fileName.endsWith(".slop.zip")) {
+            return SlopSerializableCrawlDataStream.sizeHint(fullPath);
+        }
+        else {
+            return 0;
+        }
+    }
+
+    default <T> Iterator<T> map(Function<SerializableCrawlData, Optional<T>> mapper) {
+        return new Iterator<>() {
+            T next = null;
+
+            public boolean hasNext() {
+                if (next != null)
+                    return true;
+                try {
+                    while (SerializableCrawlDataStream.this.hasNext()) {
+                        var val = mapper.apply(SerializableCrawlDataStream.this.next());
+                        if (val.isPresent()) {
+                            next = val.get();
+                            return true;
+                        }
+                    }
+                }
+                catch (IOException ex) {
+                    logger.error("Error during stream", ex);
+                }
+
+                return false;
+            }
+
+            public T next() {
+                if (next == null && !hasNext())
+                    throw new IllegalStateException("No more data to read");
+
+                T ret = next;
+                next = null;
+                return ret;
+            }
+        };
+    }
     /** For tests */
     default List<SerializableCrawlData> asList() throws IOException {
         List<SerializableCrawlData> data = new ArrayList<>();
@@ -81,7 +168,6 @@ public interface SerializableCrawlDataStream extends AutoCloseable {
         public boolean hasNext() { return iterator.hasNext(); }
         public void close() {}
     };
     }
 }
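The new `map()` default method above implements a filter-map iterator: it advances the underlying stream until the mapper returns a present `Optional`, buffering one element ahead. The same pattern in isolation, as a self-contained sketch independent of the Marginalia types:

```java
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.Optional;
import java.util.function.Function;

class FilterMapIteratorExample {
    // Same shape as SerializableCrawlDataStream.map(): skip elements the mapper
    // rejects, holding at most one mapped element in a lookahead buffer.
    static <S, T> Iterator<T> filterMap(Iterator<S> source, Function<S, Optional<T>> mapper) {
        return new Iterator<>() {
            T next = null;

            public boolean hasNext() {
                if (next != null) return true;
                while (source.hasNext()) {
                    Optional<T> val = mapper.apply(source.next());
                    if (val.isPresent()) {
                        next = val.get();
                        return true;
                    }
                }
                return false;
            }

            public T next() {
                if (next == null && !hasNext()) throw new NoSuchElementException();
                T ret = next;
                next = null;
                return ret;
            }
        };
    }

    public static void main(String[] args) {
        var it = filterMap(List.of(1, 2, 3, 4).iterator(),
                n -> n % 2 == 0 ? Optional.of("even:" + n) : Optional.empty());
        while (it.hasNext()) System.out.println(it.next()); // prints even:2 then even:4
    }
}
```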
@@ -1,7 +1,6 @@
 package nu.marginalia.io.crawldata.format;

 import nu.marginalia.contenttype.ContentType;
-import nu.marginalia.contenttype.DocumentBodyToString;
 import nu.marginalia.hash.MurmurHash3_128;
 import nu.marginalia.io.SerializableCrawlDataStream;
 import nu.marginalia.model.EdgeUrl;
@@ -18,6 +17,7 @@ import java.nio.file.Path;
import java.util.*; import java.util.*;
import java.util.stream.Stream; import java.util.stream.Stream;
@Deprecated
public class ParquetSerializableCrawlDataStream implements AutoCloseable, SerializableCrawlDataStream { public class ParquetSerializableCrawlDataStream implements AutoCloseable, SerializableCrawlDataStream {
private static final Logger logger = LoggerFactory.getLogger(ParquetSerializableCrawlDataStream.class); private static final Logger logger = LoggerFactory.getLogger(ParquetSerializableCrawlDataStream.class);
@@ -40,7 +40,7 @@ public class ParquetSerializableCrawlDataStream implements AutoCloseable, Serial
return path; return path;
} }
public int sizeHint() { public static int sizeHint(Path path) {
// Only calculate size hint for large files // Only calculate size hint for large files
// (the reason we calculate them in the first place is to assess whether it is large // (the reason we calculate them in the first place is to assess whether it is large
// because it has many documents, or because it is a small number of large documents) // because it has many documents, or because it is a small number of large documents)
@@ -124,9 +124,7 @@ public class ParquetSerializableCrawlDataStream implements AutoCloseable, Serial
} }
else if (nextRecord.body != null) { else if (nextRecord.body != null) {
try { try {
bodyString = DocumentBodyToString.getStringData( ContentType.parse(nextRecord.contentType);
ContentType.parse(nextRecord.contentType),
nextRecord.body);
} catch (Exception ex) { } catch (Exception ex) {
logger.error("Failed to convert body to string", ex); logger.error("Failed to convert body to string", ex);
status = CrawlerDocumentStatus.BAD_CHARSET; status = CrawlerDocumentStatus.BAD_CHARSET;
@@ -147,7 +145,7 @@ public class ParquetSerializableCrawlDataStream implements AutoCloseable, Serial
status.toString(), status.toString(),
"", "",
nextRecord.headers, nextRecord.headers,
bodyString, nextRecord.body,
// this field isn't actually used, maybe we can skip calculating it? // this field isn't actually used, maybe we can skip calculating it?
nextRecord.cookies, nextRecord.cookies,
lastModified, lastModified,


@@ -0,0 +1,181 @@
package nu.marginalia.io.crawldata.format;
import nu.marginalia.contenttype.ContentType;
import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.crawldata.*;
import nu.marginalia.slop.SlopCrawlDataRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.URISyntaxException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.NoSuchElementException;
public class SlopSerializableCrawlDataStream implements AutoCloseable, SerializableCrawlDataStream {
private static final Logger logger = LoggerFactory.getLogger(SlopSerializableCrawlDataStream.class);
private final SlopCrawlDataRecord.FilteringReader reader;
// Holds the next value. This is not a buffer, but to deal with the fact that
// we sometimes generate multiple SerializableCrawlData records for a single input
private final Deque<SerializableCrawlData> nextQ = new ArrayDeque<>();
private boolean wroteDomainRecord = false;
private final Path path;
public SlopSerializableCrawlDataStream(Path file) throws IOException {
path = file;
reader = new SlopCrawlDataRecord.FilteringReader(file) {
@Override
public boolean filter(String url, int status, String contentType) {
String ctLc = contentType.toLowerCase();
if (ctLc.startsWith("text/"))
return true;
else if (ctLc.startsWith("x-marginalia/"))
return true;
return false;
}
};
}
@Override
public Path path() {
return path;
}
public static int sizeHint(Path path) {
// Only calculate size hint for large files
// (the reason we calculate them in the first place is to assess whether it is large
// because it has many documents, or because it is a small number of large documents)
try {
if (Files.size(path) > 10_000_000) {
return SlopCrawlDataRecord.countGoodStatusCodes(path);
}
} catch (IOException e) {
// suppressed
}
return 0;
}
@Override
public boolean hasNext() {
try {
while (reader.hasRemaining() && nextQ.isEmpty()) {
try {
var nextRecord = reader.get();
if (!wroteDomainRecord) {
createDomainRecord(nextRecord);
wroteDomainRecord = true;
}
createDocumentRecord(nextRecord);
} catch (Exception ex) {
logger.error("Failed to create document record", ex);
}
}
return !nextQ.isEmpty();
}
catch (IOException ex) {
return false;
}
}
private void createDomainRecord(SlopCrawlDataRecord parquetRecord) throws URISyntaxException {
CrawlerDomainStatus status = CrawlerDomainStatus.OK;
String statusReason = "";
String redirectDomain = null;
// The advisory content types are used to signal various states of the crawl
// that are not actual crawled documents.
switch (parquetRecord.contentType()) {
case "x-marginalia/advisory;state=redirect" -> {
EdgeUrl crawledUrl = new EdgeUrl(parquetRecord.url());
redirectDomain = crawledUrl.getDomain().toString();
status = CrawlerDomainStatus.REDIRECT;
}
case "x-marginalia/advisory;state=blocked" -> {
status = CrawlerDomainStatus.BLOCKED;
}
case "x-marginalia/advisory;state=error" -> {
status = CrawlerDomainStatus.ERROR;
statusReason = new String(parquetRecord.body());
}
}
nextQ.add(new CrawledDomain(
parquetRecord.domain(),
redirectDomain,
status.toString(),
statusReason,
parquetRecord.ip(),
new ArrayList<>(),
new ArrayList<>()
));
}
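The advisory content-type convention above (`x-marginalia/advisory;state=...`) encodes crawl-level outcomes in the content type rather than in a document body. A standalone sketch of the mapping performed by `createDomainRecord()` (the class name is illustrative):

```java
public class AdvisoryDemo {
    enum DomainStatus { OK, REDIRECT, BLOCKED, ERROR }

    // Mirror of the switch in createDomainRecord(): advisory content types
    // carry a crawl state; anything else is treated as an ordinary document.
    static DomainStatus statusFor(String contentType) {
        return switch (contentType) {
            case "x-marginalia/advisory;state=redirect" -> DomainStatus.REDIRECT;
            case "x-marginalia/advisory;state=blocked"  -> DomainStatus.BLOCKED;
            case "x-marginalia/advisory;state=error"    -> DomainStatus.ERROR;
            default -> DomainStatus.OK;
        };
    }

    public static void main(String[] args) {
        System.out.println(statusFor("x-marginalia/advisory;state=blocked")); // BLOCKED
        System.out.println(statusFor("text/html; charset=utf-8"));            // OK
    }
}
```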
private void createDocumentRecord(SlopCrawlDataRecord nextRecord) {
CrawlerDocumentStatus status = CrawlerDocumentStatus.OK;
if (nextRecord.contentType().startsWith("x-marginalia/advisory;state=content-type-failed-probe")) {
status = CrawlerDocumentStatus.BAD_CONTENT_TYPE;
}
else if (nextRecord.contentType().startsWith("x-marginalia/advisory;state=robots-txt-skipped")) {
status = CrawlerDocumentStatus.ROBOTS_TXT;
}
else if (nextRecord.contentType().startsWith("x-marginalia/advisory")) {
// we don't care about the other advisory content types here
return;
}
else if (nextRecord.body() != null) {
try {
ContentType.parse(nextRecord.contentType());
} catch (Exception ex) {
logger.error("Failed to convert body to string", ex);
status = CrawlerDocumentStatus.BAD_CHARSET;
}
}
else {
status = CrawlerDocumentStatus.ERROR;
}
nextQ.add(new CrawledDocument("",
nextRecord.url(),
nextRecord.contentType(),
Instant.ofEpochMilli(nextRecord.timestamp()).toString(),
nextRecord.httpStatus(),
status.toString(),
"",
nextRecord.headers(),
nextRecord.body(),
// this field isn't actually used, maybe we can skip calculating it?
nextRecord.cookies(),
null,
null));
}
public void close() throws IOException {
reader.close();
}
@Override
public SerializableCrawlData next() throws IOException {
if (!hasNext())
throw new NoSuchElementException();
return nextQ.poll();
}
}


@@ -18,7 +18,7 @@ public class DocumentBodyExtractor {
return asBytes(fetchOk);
}
else if (result instanceof HttpFetchResult.Result304ReplacedWithReference retained) {
-return new DocumentBodyResult.Ok<>(retained.contentType(), retained.body().getBytes());
+return new DocumentBodyResult.Ok<>(retained.contentType(), retained.body());
}
return new DocumentBodyResult.Error<>(CrawlerDocumentStatus.ERROR, "Fetch Result Not Ok");


@@ -1,17 +1,22 @@
package nu.marginalia.model.body;
import nu.marginalia.contenttype.ContentType;
-import okhttp3.Headers;
+import nu.marginalia.model.EdgeUrl;
+import org.apache.hc.core5.http.Header;
+import org.apache.hc.core5.http.message.BasicHeader;
+import org.jetbrains.annotations.Nullable;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.netpreserve.jwarc.MessageHeaders;
import org.netpreserve.jwarc.WarcResponse;
import java.io.ByteArrayInputStream;
-import java.io.IOException;
import java.io.InputStream;
import java.net.InetAddress;
import java.net.URI;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
import java.util.Optional;
/* FIXME: This interface has a very unfortunate name that is not very descriptive.
@@ -56,42 +61,47 @@ public sealed interface HttpFetchResult {
*/
record ResultOk(URI uri,
int statusCode,
-Headers headers,
+Header[] headers,
String ipAddress,
-byte[] bytesRaw,
+byte[] bytesRaw, // raw data for the entire response including headers
int bytesStart,
int bytesLength
) implements HttpFetchResult {
+public ResultOk(URI uri, int status, MessageHeaders headers, String ipAddress, byte[] bytes, int bytesStart, int length) {
+this(uri, status, convertHeaders(headers), ipAddress, bytes, bytesStart, length);
+}
+private static Header[] convertHeaders(MessageHeaders messageHeaders) {
+List<Header> headers = new ArrayList<>(12);
+messageHeaders.map().forEach((k, v) -> {
+if (k.isBlank()) return;
+if (!Character.isAlphabetic(k.charAt(0))) return;
+for (var value : v) {
+headers.add(new BasicHeader(k, value));
+}
+});
+return headers.toArray(new Header[0]);
+}
public boolean isOk() {
return statusCode >= 200 && statusCode < 300;
}
-public ResultOk(URI uri,
-int statusCode,
-MessageHeaders headers,
-String ipAddress,
-byte[] bytesRaw,
-int bytesStart,
-int bytesLength) {
-this(uri, statusCode, convertHeaders(headers), ipAddress, bytesRaw, bytesStart, bytesLength);
-}
-private static Headers convertHeaders(MessageHeaders headers) {
-var ret = new Headers.Builder();
-for (var header : headers.map().entrySet()) {
-for (var value : header.getValue()) {
-ret.add(header.getKey(), value);
-}
-}
-return ret.build();
-}
public InputStream getInputStream() {
return new ByteArrayInputStream(bytesRaw, bytesStart, bytesLength);
}
-public Optional<Document> parseDocument() throws IOException {
+/** Copy the byte range corresponding to the payload of the response,
+Warning: Copies the data, use getInputStream() for zero copy access */
+public byte[] getBodyBytes() {
+return Arrays.copyOfRange(bytesRaw, bytesStart, bytesStart + bytesLength);
+}
+public Optional<Document> parseDocument() {
return DocumentBodyExtractor.asString(this).flatMapOpt((contentType, body) -> {
if (contentType.is("text/html")) {
return Optional.of(Jsoup.parse(body));
@@ -102,8 +112,15 @@ public sealed interface HttpFetchResult {
});
}
+@Nullable
public String header(String name) {
-return headers.get(name);
+for (var header : headers) {
+if (header.getName().equalsIgnoreCase(name)) {
+String headerValue = header.getValue();
+return headerValue;
+}
+}
+return null;
}
}
@@ -114,20 +131,10 @@ public sealed interface HttpFetchResult {
*
* @see Result304Raw for the case where the document has not yet been replaced with the reference data.
*/
-record Result304ReplacedWithReference(String url, ContentType contentType, String body) implements HttpFetchResult {
+record Result304ReplacedWithReference(String url, ContentType contentType, byte[] body) implements HttpFetchResult {
public boolean isOk() {
return true;
}
-public Optional<Document> parseDocument() {
-try {
-return Optional.of(Jsoup.parse(body));
-}
-catch (Exception ex) {
-return Optional.empty();
-}
-}
}
/** Fetching resulted in an exception */
@@ -137,6 +144,12 @@ public sealed interface HttpFetchResult {
}
}
+record ResultRedirect(EdgeUrl url) implements HttpFetchResult {
+public boolean isOk() {
+return true;
+}
+}
/** Fetching resulted in a HTTP 304, the remote content is identical to
* our reference copy. This will be replaced with a Result304ReplacedWithReference
* at a later stage.


@@ -1,8 +1,16 @@
package nu.marginalia.model.crawldata;
+import nu.marginalia.contenttype.ContentType;
+import nu.marginalia.contenttype.DocumentBodyToString;
import nu.marginalia.model.EdgeUrl;
import org.apache.commons.lang3.StringUtils;
import org.jetbrains.annotations.Nullable;
+import org.jsoup.nodes.Document;
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.Arrays;
+import java.util.Objects;
public final class CrawledDocument implements SerializableCrawlData {
public String crawlId;
@@ -19,8 +27,52 @@ public final class CrawledDocument implements SerializableCrawlData {
@Nullable
public String headers;
-public String documentBody;
+public String documentBody() {
+return DocumentBodyToString.getStringData(
+ContentType.parse(contentType),
+documentBodyBytes);
+}
+/** Attempt to parse the first sampleSize bytes of the document body into a string */
+public String documentBody(int sampleSize) {
+if (sampleSize >= documentBodyBytes.length) {
+return documentBody();
+}
+// Truncating the string at an unlucky point *may* lead to a parsing error
+// ... so we try again with a longer length
+for (int i = 0; i <= 3 && sampleSize + i < documentBodyBytes.length; i++) {
+try {
+byte[] bytes = new byte[sampleSize + i];
+System.arraycopy(documentBodyBytes, 0, bytes, 0, bytes.length);
+return DocumentBodyToString.getStringData(
+ContentType.parse(contentType),
+bytes);
+}
+catch (RuntimeException ex) {
+// Try again with i + 1
+}
+}
+throw new IllegalArgumentException("Failed to parse substring");
+}
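The retry loop in `documentBody(int sampleSize)` exists because cutting a UTF-8 byte stream at an arbitrary offset can split a multi-byte codepoint. A self-contained illustration using strict JDK decoding (the project's `DocumentBodyToString` may decode differently; the class and method names here are illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.charset.*;

public class TruncateDemo {
    // Strictly decode a UTF-8 prefix; throws if a codepoint is cut in half.
    static String decodeStrict(byte[] bytes, int len) throws CharacterCodingException {
        CharsetDecoder dec = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        return dec.decode(ByteBuffer.wrap(bytes, 0, len)).toString();
    }

    // Retry with up to three extra bytes, mirroring the sampling logic above.
    static String decodePrefix(byte[] bytes, int sampleSize) {
        for (int i = 0; i <= 3 && sampleSize + i <= bytes.length; i++) {
            try {
                return decodeStrict(bytes, sampleSize + i);
            } catch (CharacterCodingException ex) {
                // cut mid-codepoint, try one byte longer
            }
        }
        throw new IllegalArgumentException("Failed to decode prefix");
    }

    public static void main(String[] args) {
        byte[] utf8 = "naïve".getBytes(StandardCharsets.UTF_8); // 'ï' takes 2 bytes
        // Truncating at 3 bytes splits 'ï'; the retry recovers at 4 bytes
        System.out.println(decodePrefix(utf8, 3)); // naï
    }
}
```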
+public Document parseBody() throws IOException {
+// Prevent stalls from parsing excessively large documents
+return DocumentBodyToString.getParsedData(
+ContentType.parse(contentType),
+documentBodyBytes,
+200_000,
+url);
+}
+public boolean hasBody() {
+return documentBodyBytes.length > 0;
+}
+public byte[] documentBodyBytes;
/**
* This is not guaranteed to be set in all versions of the format,
* information may come in CrawledDomain instead
@@ -30,7 +82,7 @@ public final class CrawledDocument implements SerializableCrawlData {
public String lastModifiedMaybe;
public String etagMaybe;
-public CrawledDocument(String crawlId, String url, String contentType, String timestamp, int httpStatus, String crawlerStatus, String crawlerStatusDesc, @Nullable String headers, String documentBody, Boolean hasCookies, String lastModifiedMaybe, String etagMaybe) {
+public CrawledDocument(String crawlId, String url, String contentType, String timestamp, int httpStatus, String crawlerStatus, String crawlerStatusDesc, @Nullable String headers, byte[] documentBodyBytes, Boolean hasCookies, String lastModifiedMaybe, String etagMaybe) {
this.crawlId = crawlId;
this.url = url;
this.contentType = contentType;
@@ -39,7 +91,7 @@ public final class CrawledDocument implements SerializableCrawlData {
this.crawlerStatus = crawlerStatus;
this.crawlerStatusDesc = crawlerStatusDesc;
this.headers = headers;
-this.documentBody = documentBody;
+this.documentBodyBytes = Objects.requireNonNullElse(documentBodyBytes, new byte[] {});
this.hasCookies = hasCookies;
this.lastModifiedMaybe = lastModifiedMaybe;
this.etagMaybe = etagMaybe;
@@ -50,7 +102,7 @@ public final class CrawledDocument implements SerializableCrawlData {
}
@Nullable
-private String getHeader(String header) {
+public String getHeader(String header) {
if (headers == null) {
return null;
}
@@ -106,7 +158,7 @@ public final class CrawledDocument implements SerializableCrawlData {
}
public String toString() {
-return "CrawledDocument(crawlId=" + this.crawlId + ", url=" + this.url + ", contentType=" + this.contentType + ", timestamp=" + this.timestamp + ", httpStatus=" + this.httpStatus + ", crawlerStatus=" + this.crawlerStatus + ", crawlerStatusDesc=" + this.crawlerStatusDesc + ", headers=" + this.headers + ", documentBody=" + this.documentBody + ", hasCookies=" + this.hasCookies + ", lastModifiedMaybe=" + this.lastModifiedMaybe + ", etagMaybe=" + this.etagMaybe + ")";
+return "CrawledDocument(crawlId=" + this.crawlId + ", url=" + this.url + ", contentType=" + this.contentType + ", timestamp=" + this.timestamp + ", httpStatus=" + this.httpStatus + ", crawlerStatus=" + this.crawlerStatus + ", crawlerStatusDesc=" + this.crawlerStatusDesc + ", headers=" + this.headers + ", documentBody=" + documentBody() + ", hasCookies=" + this.hasCookies + ", lastModifiedMaybe=" + this.lastModifiedMaybe + ", etagMaybe=" + this.etagMaybe + ")";
}
public static class CrawledDocumentBuilder {
@@ -118,7 +170,7 @@ public final class CrawledDocument implements SerializableCrawlData {
private String crawlerStatus;
private String crawlerStatusDesc;
private @Nullable String headers;
-private String documentBody;
+private byte[] documentBodyBytes = new byte[0];
private String recrawlState;
private Boolean hasCookies;
private String lastModifiedMaybe;
@@ -168,10 +220,13 @@ public final class CrawledDocument implements SerializableCrawlData {
}
public CrawledDocumentBuilder documentBody(String documentBody) {
-this.documentBody = documentBody;
+this.documentBodyBytes = documentBody.getBytes(StandardCharsets.UTF_8);
+return this;
+}
+public CrawledDocumentBuilder documentBodyBytes(byte[] documentBodyBytes) {
+this.documentBodyBytes = documentBodyBytes;
return this;
}
@Deprecated
public CrawledDocumentBuilder recrawlState(String recrawlState) {
this.recrawlState = recrawlState;
@@ -194,11 +249,11 @@ public final class CrawledDocument implements SerializableCrawlData {
}
public CrawledDocument build() {
-return new CrawledDocument(this.crawlId, this.url, this.contentType, this.timestamp, this.httpStatus, this.crawlerStatus, this.crawlerStatusDesc, this.headers, this.documentBody, this.hasCookies, this.lastModifiedMaybe, this.etagMaybe);
+return new CrawledDocument(this.crawlId, this.url, this.contentType, this.timestamp, this.httpStatus, this.crawlerStatus, this.crawlerStatusDesc, this.headers, this.documentBodyBytes, this.hasCookies, this.lastModifiedMaybe, this.etagMaybe);
}
public String toString() {
-return "CrawledDocument.CrawledDocumentBuilder(crawlId=" + this.crawlId + ", url=" + this.url + ", contentType=" + this.contentType + ", timestamp=" + this.timestamp + ", httpStatus=" + this.httpStatus + ", crawlerStatus=" + this.crawlerStatus + ", crawlerStatusDesc=" + this.crawlerStatusDesc + ", headers=" + this.headers + ", documentBody=" + this.documentBody + ", recrawlState=" + this.recrawlState + ", hasCookies=" + this.hasCookies + ", lastModifiedMaybe=" + this.lastModifiedMaybe + ", etagMaybe=" + this.etagMaybe + ")";
+return "CrawledDocument.CrawledDocumentBuilder(crawlId=" + this.crawlId + ", url=" + this.url + ", contentType=" + this.contentType + ", timestamp=" + this.timestamp + ", httpStatus=" + this.httpStatus + ", crawlerStatus=" + this.crawlerStatus + ", crawlerStatusDesc=" + this.crawlerStatusDesc + ", headers=" + this.headers + ", documentBodyBytes=" + Arrays.toString(this.documentBodyBytes) + ", recrawlState=" + this.recrawlState + ", hasCookies=" + this.hasCookies + ", lastModifiedMaybe=" + this.lastModifiedMaybe + ", etagMaybe=" + this.etagMaybe + ")";
}
}
}


@@ -165,27 +165,42 @@ public class CrawledDocumentParquetRecordFileWriter implements AutoCloseable {
contentType = "";
}
-String headersStr = null;
+boolean hasCookies = false;
+String etag = null;
+String lastModified = null;
StringJoiner headersStrBuilder = new StringJoiner("\n");
for (var header : headers) {
-headersStrBuilder.add(header.getFirst() + ": " + header.getSecond());
+if (header.getName().equalsIgnoreCase("X-Has-Cookies")) {
+hasCookies = hasCookies || header.getValue().equals("1");
+}
+else if (header.getName().equalsIgnoreCase("ETag")) {
+etag = header.getValue();
+}
+else if (header.getName().equalsIgnoreCase("Last-Modified")) {
+lastModified = header.getValue();
+}
+else {
+headersStrBuilder.add(header.getName() + ": " + header.getValue());
+}
}
-headersStr = headersStrBuilder.toString();
+String headersStr = headersStrBuilder.toString();
write(new CrawledDocumentParquetRecord(
domain,
response.target(),
fetchOk.ipAddress(),
-WarcXCookieInformationHeader.hasCookies(response),
+hasCookies,
fetchOk.statusCode(),
response.date(),
contentType,
bodyBytes,
headersStr,
-headers.get("ETag"),
-headers.get("Last-Modified"))
-);
+etag,
+lastModified
+));
}


@@ -0,0 +1,525 @@
package nu.marginalia.slop;
import nu.marginalia.ContentTypes;
import nu.marginalia.UserAgent;
import nu.marginalia.model.body.DocumentBodyExtractor;
import nu.marginalia.model.body.DocumentBodyResult;
import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecord;
import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileReader;
import nu.marginalia.slop.column.array.ByteArrayColumn;
import nu.marginalia.slop.column.primitive.ByteColumn;
import nu.marginalia.slop.column.primitive.LongColumn;
import nu.marginalia.slop.column.primitive.ShortColumn;
import nu.marginalia.slop.column.string.EnumColumn;
import nu.marginalia.slop.column.string.StringColumn;
import nu.marginalia.slop.desc.StorageType;
import nu.marginalia.slop.storage.LargeItem;
import org.apache.commons.io.FileUtils;
import org.apache.commons.lang3.StringUtils;
import org.netpreserve.jwarc.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.util.List;
import java.util.Objects;
import java.util.StringJoiner;
public record SlopCrawlDataRecord(String domain,
String url,
String ip,
boolean cookies,
int httpStatus,
long timestamp,
String contentType,
byte[] body,
String headers)
{
private static final EnumColumn domainColumn = new EnumColumn("domain", StandardCharsets.UTF_8, StorageType.ZSTD);
private static final StringColumn urlColumn = new StringColumn("url", StandardCharsets.UTF_8, StorageType.ZSTD);
private static final StringColumn ipColumn = new StringColumn("ip", StandardCharsets.ISO_8859_1, StorageType.ZSTD);
private static final ByteColumn cookiesColumn = new ByteColumn("cookies");
private static final ShortColumn statusColumn = new ShortColumn("httpStatus");
private static final LongColumn timestampColumn = new LongColumn("timestamp");
private static final EnumColumn contentTypeColumn = new EnumColumn("contentType", StandardCharsets.UTF_8);
private static final ByteArrayColumn bodyColumn = new ByteArrayColumn("body", StorageType.ZSTD);
private static final StringColumn headerColumn = new StringColumn("header", StandardCharsets.UTF_8, StorageType.ZSTD);
public SlopCrawlDataRecord(CrawledDocumentParquetRecord parquetRecord) {
this(parquetRecord.domain,
parquetRecord.url,
parquetRecord.ip,
parquetRecord.cookies,
parquetRecord.httpStatus,
parquetRecord.timestamp.toEpochMilli(),
parquetRecord.contentType,
parquetRecord.body,
parquetRecord.headers
);
}
private static SlopCrawlDataRecord forDomainRedirect(String domain, Instant date, String redirectDomain) {
return new SlopCrawlDataRecord(domain,
"https://" + redirectDomain + "/",
"",
false,
0,
date.toEpochMilli(),
"x-marginalia/advisory;state=redirect",
new byte[0],
""
);
}
private static SlopCrawlDataRecord forDomainError(String domain, Instant date, String ip, String errorStatus) {
return new SlopCrawlDataRecord(domain,
"https://" + domain + "/",
ip,
false,
0,
date.toEpochMilli(),
"x-marginalia/advisory;state=error",
errorStatus.getBytes(),
""
);
}
private static SlopCrawlDataRecord forDocError(String domain, Instant date, String url, String errorStatus) {
return new SlopCrawlDataRecord(domain,
url,
"",
false,
0,
date.toEpochMilli(),
errorStatus,
new byte[0],
""
);
}
public static void convertFromParquet(Path parquetInput, Path slopOutput) throws IOException {
Path tempDir = Files.createTempDirectory(slopOutput.getParent(), "conversion");
try (var writer = new Writer(tempDir);
var stream = CrawledDocumentParquetRecordFileReader.stream(parquetInput))
{
stream.forEach(
parquetRecord -> {
try {
writer.write(new SlopCrawlDataRecord(parquetRecord));
} catch (IOException e) {
throw new RuntimeException(e);
}
});
}
catch (IOException ex) {
FileUtils.deleteDirectory(tempDir.toFile());
throw ex;
}
try {
SlopTablePacker.packToSlopZip(tempDir, slopOutput);
FileUtils.deleteDirectory(tempDir.toFile());
}
catch (Exception ex) {
logger.error("Failed to convert WARC file to Parquet", ex);
}
}
private static final Logger logger = LoggerFactory.getLogger(SlopCrawlDataRecord.class);
public static void convertWarc(String domain,
UserAgent userAgent,
Path warcInputFile,
Path slopOutputFile) throws IOException {
Path tempDir = Files.createTempDirectory(slopOutputFile.getParent(), "slop-"+domain);
try (var warcReader = new WarcReader(warcInputFile);
var slopWriter = new SlopCrawlDataRecord.Writer(tempDir)
) {
WarcXResponseReference.register(warcReader);
WarcXEntityRefused.register(warcReader);
String uaString = userAgent.uaString();
for (var record : warcReader) {
try {
if (record instanceof WarcResponse response) {
// this also captures WarcXResponseReference, which inherits from WarcResponse
// and is used to store old responses from previous crawls; in this part of the logic
// we treat them the same as a normal response
if (!filterResponse(uaString, response)) {
continue;
}
slopWriter.write(domain, response);
} else if (record instanceof WarcXEntityRefused refused) {
slopWriter.write(domain, refused);
} else if (record instanceof Warcinfo warcinfo) {
slopWriter.write(warcinfo);
}
}
catch (Exception ex) {
logger.error("Failed to convert WARC record to Parquet", ex);
}
}
}
catch (Exception ex) {
logger.error("Failed to convert WARC file to Parquet", ex);
}
try {
SlopTablePacker.packToSlopZip(tempDir, slopOutputFile);
FileUtils.deleteDirectory(tempDir.toFile());
}
catch (Exception ex) {
logger.error("Failed to convert WARC file to Parquet", ex);
}
}
/** Return true if the WarcResponse should be excluded from conversion */
private static boolean filterResponse(String uaString, WarcResponse response) throws IOException {
// We don't want to store robots.txt files, as they are not
// interesting for the analysis we want to do. This is important
// since txt-files in general are interesting, and we don't want to
// exclude them as a class.
if (response.targetURI().getPath().equals("/robots.txt")) {
return false;
}
var headers = response.http().headers();
var robotsTags = headers.all("X-Robots-Tag");
if (!isXRobotsTagsPermitted(robotsTags, uaString)) {
return false;
}
// Strip out responses with content types we aren't interested in
// (though ideally we wouldn't download these at all)
String contentType = headers.first("Content-Type").orElse("text/plain").toLowerCase();
if (!ContentTypes.isAccepted(contentType)) {
return false;
}
return true;
}
/** Check X-Robots-Tag header tag to see if we are allowed to index this page.
* <p>
* Reference: <a href="https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag">https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag</a>
*
* @param xRobotsHeaderTags List of X-Robots-Tag values
* @param userAgent User agent string
* @return true if we are allowed to index this page
*/
// Visible for tests
public static boolean isXRobotsTagsPermitted(List<String> xRobotsHeaderTags, String userAgent) {
boolean isPermittedGeneral = true;
boolean isPermittedMarginalia = false;
boolean isForbiddenMarginalia = false;
for (String header : xRobotsHeaderTags) {
if (header.indexOf(':') >= 0) {
String[] parts = StringUtils.split(header, ":", 2);
if (parts.length < 2)
continue;
// Is this relevant to us?
if (!Objects.equals(parts[0].trim(), userAgent))
continue;
if (parts[1].contains("noindex"))
isForbiddenMarginalia = true;
else if (parts[1].contains("none"))
isForbiddenMarginalia = true;
else if (parts[1].contains("all"))
isPermittedMarginalia = true;
}
else {
if (header.contains("noindex"))
isPermittedGeneral = false;
if (header.contains("none"))
isPermittedGeneral = false;
}
}
if (isPermittedMarginalia)
return true;
if (isForbiddenMarginalia)
return false;
return isPermittedGeneral;
}
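The `isXRobotsTagsPermitted()` logic above gives a Marginalia-scoped directive priority over the unscoped general one: an agent-specific `all` always permits, an agent-specific `noindex`/`none` always forbids, and otherwise the unscoped directives decide. A compact standalone restatement of those rules (hypothetical class name; the original uses `StringUtils.split`, which behaves slightly differently on degenerate input):

```java
import java.util.List;

public class XRobotsDemo {
    // Compact restatement of the X-Robots-Tag evaluation rules.
    static boolean permitted(List<String> tags, String userAgent) {
        boolean general = true, allowUa = false, denyUa = false;
        for (String header : tags) {
            int colon = header.indexOf(':');
            if (colon >= 0) {
                String agent = header.substring(0, colon).trim();
                String directives = header.substring(colon + 1);
                // Agent-scoped directives only apply to our user agent
                if (!agent.equals(userAgent)) continue;
                if (directives.contains("noindex") || directives.contains("none")) denyUa = true;
                else if (directives.contains("all")) allowUa = true;
            } else {
                // Unscoped directives apply to everyone
                if (header.contains("noindex") || header.contains("none")) general = false;
            }
        }
        if (allowUa) return true;
        if (denyUa) return false;
        return general;
    }

    public static void main(String[] args) {
        String ua = "search.marginalia.nu";
        System.out.println(permitted(List.of(), ua));                          // true
        System.out.println(permitted(List.of("noindex"), ua));                 // false
        System.out.println(permitted(List.of("googlebot: noindex"), ua));      // true
        System.out.println(permitted(List.of("noindex", ua + ": all"), ua));   // true
    }
}
```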
public static int countGoodStatusCodes(Path path) throws IOException {
int cnt = 0;
try (var table = new SlopTable(path)) {
ShortColumn.Reader statusReader = statusColumn.open(table);
while (statusReader.hasRemaining()) {
if (statusReader.get() == 200) {
cnt++;
}
}
}
return cnt;
}
public static class Writer extends SlopTable {
private final EnumColumn.Writer domainColumnWriter;
private final StringColumn.Writer urlColumnWriter;
private final StringColumn.Writer ipColumnWriter;
private final ByteColumn.Writer cookiesColumnWriter;
private final ShortColumn.Writer statusColumnWriter;
private final LongColumn.Writer timestampColumnWriter;
private final EnumColumn.Writer contentTypeColumnWriter;
private final ByteArrayColumn.Writer bodyColumnWriter;
private final StringColumn.Writer headerColumnWriter;
public Writer(Path path) throws IOException {
super(path);
domainColumnWriter = domainColumn.create(this);
urlColumnWriter = urlColumn.create(this);
ipColumnWriter = ipColumn.create(this);
cookiesColumnWriter = cookiesColumn.create(this);
statusColumnWriter = statusColumn.create(this);
timestampColumnWriter = timestampColumn.create(this);
contentTypeColumnWriter = contentTypeColumn.create(this);
bodyColumnWriter = bodyColumn.create(this);
headerColumnWriter = headerColumn.create(this);
}
public void write(SlopCrawlDataRecord record) throws IOException {
domainColumnWriter.put(record.domain);
urlColumnWriter.put(record.url);
ipColumnWriter.put(record.ip);
cookiesColumnWriter.put(record.cookies ? (byte) 1 : (byte) 0);
statusColumnWriter.put((short) record.httpStatus);
timestampColumnWriter.put(record.timestamp);
contentTypeColumnWriter.put(record.contentType);
bodyColumnWriter.put(record.body);
headerColumnWriter.put(record.headers);
}
public void write(String domain, WarcResponse response) throws IOException {
HttpFetchResult result = HttpFetchResult.importWarc(response);
if (!(result instanceof HttpFetchResult.ResultOk fetchOk)) {
return;
}
byte[] bodyBytes;
String contentType;
var body = DocumentBodyExtractor.asBytes(result);
var headers = fetchOk.headers();
if (body instanceof DocumentBodyResult.Ok<byte[]> bodyOk) {
bodyBytes = bodyOk.body();
contentType = bodyOk.contentType().toString();
}
else {
bodyBytes = new byte[0];
contentType = "";
}
boolean hasCookies = false;
String headersStr;
StringJoiner headersStrBuilder = new StringJoiner("\n");
for (var header : headers) {
if (header.getName().equalsIgnoreCase("X-Cookies") && "1".equals(header.getValue())) {
hasCookies = true;
}
headersStrBuilder.add(header.getName() + ": " + header.getValue());
}
headersStr = headersStrBuilder.toString();
write(new SlopCrawlDataRecord(
domain,
response.target(),
fetchOk.ipAddress(),
hasCookies,
fetchOk.statusCode(),
response.date().toEpochMilli(),
contentType,
bodyBytes,
headersStr
)
);
}
private void write(String domain, WarcXEntityRefused refused) throws IOException {
URI profile = refused.profile();
String meta;
if (profile.equals(WarcXEntityRefused.documentRobotsTxtSkippedURN)) {
meta = "x-marginalia/advisory;state=robots-txt-skipped";
}
else if (profile.equals(WarcXEntityRefused.documentBadContentTypeURN)) {
meta = "x-marginalia/advisory;state=content-type-failed-probe";
}
else if (profile.equals(WarcXEntityRefused.documentProbeTimeout)) {
meta = "x-marginalia/advisory;state=timeout-probe";
}
else if (profile.equals(WarcXEntityRefused.documentUnspecifiedError)) {
meta = "x-marginalia/advisory;state=doc-error";
}
else {
meta = "x-marginalia/advisory;state=unknown";
}
write(forDocError(domain, refused.date(), refused.target(), meta));
}
private void write(Warcinfo warcinfo) throws IOException {
String selfDomain = warcinfo.fields().first("domain").orElse("");
String ip = warcinfo.fields().first("ip").orElse("");
String probeStatus = warcinfo.fields().first("X-WARC-Probe-Status").orElse("");
if (probeStatus.startsWith("REDIRECT")) {
String redirectDomain = probeStatus.substring("REDIRECT;".length());
write(forDomainRedirect(selfDomain, warcinfo.date(), redirectDomain));
}
else if (!"OK".equals(probeStatus)) {
write(forDomainError(selfDomain, warcinfo.date(), ip, probeStatus));
}
}
}
public static class Reader extends SlopTable {
private final EnumColumn.Reader domainColumnReader;
private final StringColumn.Reader urlColumnReader;
private final StringColumn.Reader ipColumnReader;
private final ByteColumn.Reader cookiesColumnReader;
private final ShortColumn.Reader statusColumnReader;
private final LongColumn.Reader timestampColumnReader;
private final EnumColumn.Reader contentTypeColumnReader;
private final ByteArrayColumn.Reader bodyColumnReader;
private final StringColumn.Reader headerColumnReader;
public Reader(Path path) throws IOException {
super(path);
domainColumnReader = domainColumn.open(this);
urlColumnReader = urlColumn.open(this);
ipColumnReader = ipColumn.open(this);
cookiesColumnReader = cookiesColumn.open(this);
statusColumnReader = statusColumn.open(this);
timestampColumnReader = timestampColumn.open(this);
contentTypeColumnReader = contentTypeColumn.open(this);
bodyColumnReader = bodyColumn.open(this);
headerColumnReader = headerColumn.open(this);
}
public SlopCrawlDataRecord get() throws IOException {
return new SlopCrawlDataRecord(
domainColumnReader.get(),
urlColumnReader.get(),
ipColumnReader.get(),
cookiesColumnReader.get() == 1,
statusColumnReader.get(),
timestampColumnReader.get(),
contentTypeColumnReader.get(),
bodyColumnReader.get(),
headerColumnReader.get()
);
}
public boolean hasRemaining() throws IOException {
return domainColumnReader.hasRemaining();
}
}
public abstract static class FilteringReader extends SlopTable {
private final EnumColumn.Reader domainColumnReader;
private final StringColumn.Reader urlColumnReader;
private final StringColumn.Reader ipColumnReader;
private final ByteColumn.Reader cookiesColumnReader;
private final ShortColumn.Reader statusColumnReader;
private final LongColumn.Reader timestampColumnReader;
private final EnumColumn.Reader contentTypeColumnReader;
private final ByteArrayColumn.Reader bodyColumnReader;
private final StringColumn.Reader headerColumnReader;
private SlopCrawlDataRecord next = null;
public FilteringReader(Path path) throws IOException {
super(path);
domainColumnReader = domainColumn.open(this);
urlColumnReader = urlColumn.open(this);
ipColumnReader = ipColumn.open(this);
cookiesColumnReader = cookiesColumn.open(this);
statusColumnReader = statusColumn.open(this);
timestampColumnReader = timestampColumn.open(this);
contentTypeColumnReader = contentTypeColumn.open(this);
bodyColumnReader = bodyColumn.open(this);
headerColumnReader = headerColumn.open(this);
}
public abstract boolean filter(String url, int status, String contentType);
public SlopCrawlDataRecord get() throws IOException {
if (next == null) {
if (!hasRemaining()) {
throw new IllegalStateException("No more values remaining");
}
}
var val = next;
next = null;
return val;
}
public boolean hasRemaining() throws IOException {
if (next != null)
return true;
while (domainColumnReader.hasRemaining()) {
String domain = domainColumnReader.get();
String url = urlColumnReader.get();
String ip = ipColumnReader.get();
boolean cookies = cookiesColumnReader.get() == 1;
int status = statusColumnReader.get();
long timestamp = timestampColumnReader.get();
String contentType = contentTypeColumnReader.get();
LargeItem<byte[]> body = bodyColumnReader.getLarge();
LargeItem<String> headers = headerColumnReader.getLarge();
if (filter(url, status, contentType)) {
next = new SlopCrawlDataRecord(
domain, url, ip, cookies, status, timestamp, contentType, body.get(), headers.get()
);
return true;
}
else {
body.close();
headers.close();
}
}
return false;
}
}
}


@@ -1,35 +0,0 @@
package org.netpreserve.jwarc;
import okhttp3.HttpUrl;
import okhttp3.OkHttpClient;
/** Encapsulates out-of-band information about whether a website uses cookies,
* using a non-standard WARC header "X-Has-Cookies".
*/
public class WarcXCookieInformationHeader {
private boolean hasCookies = false;
private static final String headerName = "X-Has-Cookies";
public void update(OkHttpClient client, HttpUrl url) {
if (!hasCookies) {
hasCookies = !client.cookieJar().loadForRequest(url).isEmpty();
}
}
public boolean hasCookies() {
return hasCookies;
}
public void paint(WarcResponse.Builder builder) {
builder.addHeader(headerName, hasCookies ? "1" : "0");
}
public void paint(WarcXResponseReference.Builder builder) {
builder.addHeader(headerName, hasCookies ? "1" : "0");
}
public static boolean hasCookies(WarcRecord record) {
return record.headers().contains(headerName, "1");
}
}


@@ -80,7 +80,7 @@ class CrawledDocumentParquetRecordFileWriterTest {
         var document = (CrawledDocument) secondItem;
         assertEquals("https://www.marginalia.nu/", document.url);
         assertEquals("text/html", document.contentType);
-        assertEquals("hello world", document.documentBody);
+        assertEquals("hello world", document.documentBody());
         assertEquals(200, document.httpStatus);
     }
@@ -103,7 +103,7 @@ class CrawledDocumentParquetRecordFileWriterTest {
                 System.out.println(doc.url);
                 System.out.println(doc.contentType);
                 System.out.println(doc.httpStatus);
-                System.out.println(doc.documentBody.length());
+                System.out.println(doc.documentBody().length());
             }
         }
     } catch (IOException e) {


@@ -8,9 +8,10 @@
 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.sql.SQLException;
+import java.time.Duration;
 import java.time.Instant;
-import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.*;
 class DomainStateDbTest {
@@ -26,7 +27,7 @@ class DomainStateDbTest {
     }
     @Test
-    public void testSunnyDay() throws SQLException {
+    public void testSummaryRecord() throws SQLException {
         try (var db = new DomainStateDb(tempFile)) {
             var allFields = new DomainStateDb.SummaryRecord(
                 "all.marginalia.nu",
@@ -47,8 +48,8 @@ class DomainStateDbTest {
             db.save(allFields);
             db.save(minFields);
-            assertEquals(allFields, db.get("all.marginalia.nu").orElseThrow());
-            assertEquals(minFields, db.get("min.marginalia.nu").orElseThrow());
+            assertEquals(allFields, db.getSummary("all.marginalia.nu").orElseThrow());
+            assertEquals(minFields, db.getSummary("min.marginalia.nu").orElseThrow());
             var updatedAllFields = new DomainStateDb.SummaryRecord(
                 "all.marginalia.nu",
@@ -59,7 +60,36 @@ class DomainStateDbTest {
             );
             db.save(updatedAllFields);
-            assertEquals(updatedAllFields, db.get("all.marginalia.nu").orElseThrow());
+            assertEquals(updatedAllFields, db.getSummary("all.marginalia.nu").orElseThrow());
         }
     }
+    @Test
+    public void testMetadata() throws SQLException {
+        try (var db = new DomainStateDb(tempFile)) {
+            var original = new DomainStateDb.CrawlMeta("example.com", Instant.ofEpochMilli(12345), Duration.ofMillis(30), Duration.ofMillis(300), 1, 2, 3);
+            db.save(original);
+            var maybeMeta = db.getMeta("example.com");
+            assertTrue(maybeMeta.isPresent());
+            assertEquals(original, maybeMeta.get());
+        }
+    }
+    @Test
+    public void testFavicon() throws SQLException {
+        try (var db = new DomainStateDb(tempFile)) {
+            db.saveIcon("www.marginalia.nu", new DomainStateDb.FaviconRecord("text/plain", "hello world".getBytes()));
+            var maybeData = db.getIcon("www.marginalia.nu");
+            assertTrue(maybeData.isPresent());
+            var actualData = maybeData.get();
+            assertEquals("text/plain", actualData.contentType());
+            assertArrayEquals("hello world".getBytes(), actualData.imageData());
+            maybeData = db.getIcon("foobar");
+            assertTrue(maybeData.isEmpty());
+        }
+    }
 }


@@ -0,0 +1,140 @@
package nu.marginalia.crawl.fetcher;
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.client.WireMock;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;
import nu.marginalia.UserAgent;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
import nu.marginalia.model.EdgeUrl;
import org.junit.jupiter.api.*;
import java.io.IOException;
import java.net.URISyntaxException;
@Tag("slow")
class HttpFetcherImplContentTypeProbeTest {
private HttpFetcherImpl fetcher;
private static WireMockServer wireMockServer;
private static EdgeUrl timeoutUrl;
private static EdgeUrl contentTypeHtmlUrl;
private static EdgeUrl contentTypeBinaryUrl;
private static EdgeUrl redirectUrl;
private static EdgeUrl badHttpStatusUrl;
private static EdgeUrl onlyGetAllowedUrl;
@BeforeAll
public static void setupAll() throws URISyntaxException {
wireMockServer =
new WireMockServer(WireMockConfiguration.wireMockConfig()
.port(18089));
timeoutUrl = new EdgeUrl("http://localhost:18089/timeout.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(timeoutUrl.path))
.willReturn(WireMock.aResponse()
.withFixedDelay(15000))); // 15 second delay to simulate a timeout
contentTypeHtmlUrl = new EdgeUrl("http://localhost:18089/test.html.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(contentTypeHtmlUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withStatus(200)));
contentTypeBinaryUrl = new EdgeUrl("http://localhost:18089/test.bad.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(contentTypeBinaryUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "application/octet-stream")
.withStatus(200)));
redirectUrl = new EdgeUrl("http://localhost:18089/redirect.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(redirectUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Location", "http://localhost:18089/test.html.bin")
.withStatus(301)));
badHttpStatusUrl = new EdgeUrl("http://localhost:18089/badstatus.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(badHttpStatusUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withStatus(500)));
onlyGetAllowedUrl = new EdgeUrl("http://localhost:18089/onlyget.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(onlyGetAllowedUrl.path))
.willReturn(WireMock.aResponse()
.withStatus(405))); // Method Not Allowed
wireMockServer.stubFor(WireMock.get(WireMock.urlEqualTo(onlyGetAllowedUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withStatus(200)));
wireMockServer.start();
}
@AfterAll
public static void tearDownAll() {
wireMockServer.stop();
}
@BeforeEach
public void setUp() {
fetcher = new HttpFetcherImpl(new UserAgent("test.marginalia.nu", "test.marginalia.nu"));
}
@AfterEach
public void tearDown() throws IOException {
fetcher.close();
}
@Test
public void testProbeContentTypeHtmlShortcircuitPath() throws URISyntaxException {
var result = fetcher.probeContentType(new EdgeUrl("https://localhost/test.html"), new CrawlDelayTimer(50), ContentTags.empty());
Assertions.assertInstanceOf(HttpFetcher.ContentTypeProbeResult.NoOp.class, result);
}
@Test
public void testProbeContentTypeHtmlShortcircuitTags() {
var result = fetcher.probeContentType(contentTypeBinaryUrl, new CrawlDelayTimer(50), new ContentTags("a", "b"));
Assertions.assertInstanceOf(HttpFetcher.ContentTypeProbeResult.NoOp.class, result);
}
@Test
public void testProbeContentTypeHtml() {
var result = fetcher.probeContentType(contentTypeHtmlUrl, new CrawlDelayTimer(50), ContentTags.empty());
Assertions.assertEquals(new HttpFetcher.ContentTypeProbeResult.Ok(contentTypeHtmlUrl), result);
}
@Test
public void testProbeContentTypeBinary() {
var result = fetcher.probeContentType(contentTypeBinaryUrl, new CrawlDelayTimer(50), ContentTags.empty());
Assertions.assertEquals(new HttpFetcher.ContentTypeProbeResult.BadContentType("application/octet-stream", 200), result);
}
@Test
public void testProbeContentTypeRedirect() {
var result = fetcher.probeContentType(redirectUrl, new CrawlDelayTimer(50), ContentTags.empty());
Assertions.assertEquals(new HttpFetcher.ContentTypeProbeResult.Redirect(contentTypeHtmlUrl), result);
}
@Test
public void testProbeContentTypeBadHttpStatus() {
var result = fetcher.probeContentType(badHttpStatusUrl, new CrawlDelayTimer(50), ContentTags.empty());
Assertions.assertEquals(new HttpFetcher.ContentTypeProbeResult.HttpError(500, "Bad status code"), result);
}
@Test
public void testOnlyGetAllowed() {
var result = fetcher.probeContentType(onlyGetAllowedUrl, new CrawlDelayTimer(50), ContentTags.empty());
Assertions.assertEquals(new HttpFetcher.ContentTypeProbeResult.Ok(onlyGetAllowedUrl), result);
}
@Test
public void testTimeout() {
var result = fetcher.probeContentType(timeoutUrl, new CrawlDelayTimer(50), ContentTags.empty());
Assertions.assertInstanceOf(HttpFetcher.ContentTypeProbeResult.Timeout.class, result);
}
}


@@ -0,0 +1,89 @@
package nu.marginalia.crawl.fetcher;
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.client.WireMock;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;
import nu.marginalia.UserAgent;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.crawldata.CrawlerDomainStatus;
import org.junit.jupiter.api.*;
import java.io.IOException;
import java.net.URISyntaxException;
@Tag("slow")
class HttpFetcherImplDomainProbeTest {
private HttpFetcherImpl fetcher;
private static WireMockServer wireMockServer;
private static EdgeUrl timeoutUrl;
@BeforeAll
public static void setupAll() throws URISyntaxException {
wireMockServer =
new WireMockServer(WireMockConfiguration.wireMockConfig()
.port(18089));
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo("/timeout"))
.willReturn(WireMock.aResponse()
.willReturn(WireMock.aResponse()
.withFixedDelay(15000))); // 15 second delay to simulate a timeout
wireMockServer.start();
timeoutUrl = new EdgeUrl("http://localhost:18089/timeout");
}
@AfterAll
public static void tearDownAll() {
wireMockServer.stop();
}
@BeforeEach
public void setUp() {
fetcher = new HttpFetcherImpl(new UserAgent("test.marginalia.nu", "test.marginalia.nu"));
}
@AfterEach
public void tearDown() throws IOException {
fetcher.close();
}
@Test
public void testProbeDomain() throws URISyntaxException {
var result = fetcher.probeDomain(new EdgeUrl("https://www.marginalia.nu/"));
Assertions.assertEquals(new HttpFetcher.DomainProbeResult.Ok(new EdgeUrl("https://www.marginalia.nu/")), result);
}
@Test
public void testProbeDomainProtoUpgrade() throws URISyntaxException {
var result = fetcher.probeDomain(new EdgeUrl("http://www.marginalia.nu/"));
Assertions.assertEquals(new HttpFetcher.DomainProbeResult.Ok(new EdgeUrl("https://www.marginalia.nu/")), result);
}
@Test
public void testProbeDomainRedirect() throws URISyntaxException {
var result = fetcher.probeDomain(new EdgeUrl("http://search.marginalia.nu/"));
Assertions.assertEquals(new HttpFetcher.DomainProbeResult.Redirect(new EdgeDomain("marginalia-search.com")), result);
}
@Test
public void testProbeDomainOnlyGET() throws URISyntaxException {
// Check that the domain probe succeeds against a server that only allows GET requests
var result = fetcher.probeDomain(new EdgeUrl("https://marginalia-search.com/"));
Assertions.assertEquals(new HttpFetcher.DomainProbeResult.Ok(new EdgeUrl("https://marginalia-search.com/")), result);
}
@Test
public void testProbeDomainError() throws URISyntaxException {
var result = fetcher.probeDomain(new EdgeUrl("https://invalid.example.com/"));
Assertions.assertEquals(new HttpFetcher.DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Error during domain probe"), result);
}
@Test
public void testProbeDomainTimeout() throws URISyntaxException {
var result = fetcher.probeDomain(timeoutUrl);
Assertions.assertEquals(new HttpFetcher.DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Timeout during domain probe"), result);
}
}

Some files were not shown because too many files have changed in this diff.