mirror of https://github.com/MarginaliaSearch/MarginaliaSearch.git synced 2025-10-06 07:32:38 +02:00

Compare commits

...

161 Commits

Author SHA1 Message Date
Viktor Lofgren
500f63e921 (crawler) Lower max conn per route 2025-04-17 18:36:16 +02:00
Viktor Lofgren
6dfbedda1e (crawler) Increase max conn per route and connection timeout 2025-04-17 18:31:46 +02:00
Viktor Lofgren
9715ddb105 (crawler) Increase max pool size to a large value 2025-04-17 18:22:58 +02:00
Viktor Lofgren
1fc6313a77 (crawler) Remove log noise when retrying a bad URL 2025-04-17 17:10:46 +02:00
Viktor Lofgren
b1249d5b8a (crawler) Fix broken test. 2025-04-17 17:01:42 +02:00
Viktor
ef95d59b07 Merge pull request #161 from MarginaliaSearch/apache-httpclient-in-crawler
The previously used Java HttpClient seems unsuitable for crawler usage; it led to issues like send() operations sometimes hanging forever, with clunky workarounds such as running each send operation in a separate Future that can be cancelled on a timeout.

The most damning flaw is that it does not offer socket timeouts. If a server responds in a timely manner but then, whether from high load or malice, stops sending data, Java's built-in HttpClient will hang forever.

It simply has too many assumptions that break, and fails to expose the inner workings of the connection pool well enough to configure it satisfactorily, for example by setting an SO_LINGER value or limiting the number of concurrent connections to a host.

Apache's HttpClient solves all these problems.

The change also includes a new battery of tests for the HttpFetcher, and refactors the retriever class a bit to move stuff into the HttpFetcher, leading to a better separation of concerns.

The crawler will also be a bit more clever when fetching documents, and attempt to use range queries where supported to limit the number of bytes, as interrupting connections is undesirable and leads to connection storms and bufferbloat.
2025-04-17 16:57:19 +02:00
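For context, a minimal sketch of the kind of control Apache HttpClient gives that the JDK client does not; the values are illustrative placeholders, not the crawler's actual settings (those are tuned in the commits above):

    import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
    import org.apache.hc.client5.http.impl.classic.HttpClients;
    import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManagerBuilder;
    import org.apache.hc.core5.http.io.SocketConfig;
    import org.apache.hc.core5.util.Timeout;

    var connectionManager = PoolingHttpClientConnectionManagerBuilder.create()
            .setMaxConnPerRoute(2)     // per-host connection cap, which the JDK client does not expose
            .setMaxConnTotal(5000)     // overall pool ceiling
            .setDefaultSocketConfig(SocketConfig.custom()
                    .setSoTimeout(Timeout.ofSeconds(10))  // socket read timeout: a stalled server fails instead of hanging forever
                    .build())
            .build();

    CloseableHttpClient client = HttpClients.custom()
            .setConnectionManager(connectionManager)
            .build();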
Viktor Lofgren
acdd8664f5 (crawler) More logging for the crawler, in a separate file. 2025-04-17 16:55:50 +02:00
Viktor Lofgren
6b12eac58a (crawler) Fix crawler retriever test to use the slop format 2025-04-17 16:35:13 +02:00
Viktor Lofgren
bb3f1f395a (crawler) Fix bug where headers were not stored correctly
This was the result of refactoring to Apache HttpClient.
2025-04-17 16:34:41 +02:00
Viktor Lofgren
b661beef41 (crawler) Amend recrawl logic to match redirects as being unchanged if their Location is the same. 2025-04-17 16:34:05 +02:00
Viktor Lofgren
9888c47f19 (crawler) Add custom Keep-Alive settings for HttpClient with max keep-alive of 30s 2025-04-17 15:25:46 +02:00
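A minimal sketch of such a cap using Apache HttpClient 5's keep-alive strategy hook; the 30 second figure comes from the commit message, the rest is illustrative:

    import org.apache.hc.client5.http.ConnectionKeepAliveStrategy;
    import org.apache.hc.client5.http.impl.classic.HttpClients;
    import org.apache.hc.core5.util.TimeValue;

    // Keep idle connections around for at most 30 seconds, regardless of what the server advertises.
    ConnectionKeepAliveStrategy keepAlive = (response, context) -> TimeValue.ofSeconds(30);

    var client = HttpClients.custom()
            .setKeepAliveStrategy(keepAlive)
            .build();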
Viktor Lofgren
dcef7e955b (crawler) Try to avoid unnecessary connection resets
In order to keep connections alive, the crawler will consume data past its max size (but hope and pray the server supports range queries) as long as we've not exceeded the timeout.

This permits us to keep the connection alive in more scenarios, which is helpful for the health of the network stack, as constant TCP handshakes can lead to quite a lot of buffer bloat.

This will increase the bandwidth requirements in some scenarios, but on the other hand, it will increase the available bandwidth as well.
2025-04-17 14:51:33 +02:00
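A rough sketch of the draining idea, with illustrative names and limits rather than the crawler's actual code:

    import java.io.IOException;
    import java.io.InputStream;
    import java.time.Instant;

    // After reading the bytes we actually want, drain the rest of the body so the connection
    // can return to the pool; give up once the deadline passes and let the caller close it.
    static void drainQuietly(InputStream body, Instant deadline) {
        byte[] scratch = new byte[8192];
        try {
            while (Instant.now().isBefore(deadline)) {
                if (body.read(scratch) < 0)
                    return; // fully consumed; the connection can be reused
            }
        } catch (IOException e) {
            // fall through; the caller closes the stream, forcing a reset
        }
    }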
Viktor Lofgren
b3973a1dd7 (crawler) Remove unnecessary crawl delay when not ct-probing
The crawler would *always* incur the crawl delay penalty associated with content type probing, even if it wasn't actually probing.  This removes the delay when we are not probing.
2025-04-17 14:39:04 +02:00
Viktor Lofgren
8bd05d6d90 (crawler) Attempt to use range queries where available
This might help in some circumstances to avoid fetching more data than we are interested in.
2025-04-17 14:37:55 +02:00
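For illustration, a range-limited fetch with Apache HttpClient might look like this; the byte limit is a placeholder, and servers that ignore the header simply return 200 with the full body:

    import org.apache.hc.client5.http.classic.methods.HttpGet;
    import org.apache.hc.core5.http.HttpHeaders;

    HttpGet request = new HttpGet(url);
    request.addHeader(HttpHeaders.RANGE, "bytes=0-" + (maxBytes - 1));
    // A 206 Partial Content response means the server honored the range; a 200 means it did not.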
Viktor Lofgren
59df8e356e (crawler) Do not fail domain and content type probe on 405
Some endpoints do not support the HEAD method.  This has historically broken the crawler when it attempts to use HEAD to probe certain URLs that are suspected of being e.g. binary.

The change makes it so that we bypass the probing on 405 instead, and for the domain probe logic, we switch to a small range queried GET.
2025-04-17 13:54:28 +02:00
Viktor Lofgren
7161162a35 (crawler) Write WARC records in a sane order 2025-04-17 13:36:39 +02:00
Viktor Lofgren
d7c4c5141f (crawler) Migrate to Apache HttpClient for crawler
The previously used Java HttpClient seems unsuitable for crawler usage;
it led to issues like send() operations sometimes hanging forever,
with clunky workarounds such as running each send operation in a separate
Future that can be cancelled on a timeout.

It has too many assumptions that break, and fails to expose
the inner workings of the connection pool well enough to configure it
in a satisfactory manner.

Apache's HttpClient solves all these problems.

The change also includes a new battery of tests for the HttpFetcher,
and refactors the retriever class a bit to move stuff into the HttpFetcher,
leading to a better separation of concerns.
2025-04-17 12:51:08 +02:00
Viktor Lofgren
88e9b8fb05 (crawler) Throttle the establishment of new connections
To avoid network congestion from the packet storm created when establishing hundreds or thousands of connections at the same time, pace the opening of new connections.
2025-04-08 22:53:02 +02:00
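A minimal sketch of the pacing idea, assuming a simple shared pacer rather than the crawler's actual mechanism:

    // Enforce a minimum interval between new connection attempts across all crawler threads.
    class ConnectionPacer {
        private final long minIntervalMs;
        private long nextSlot = 0;

        ConnectionPacer(long minIntervalMs) {
            this.minIntervalMs = minIntervalMs;
        }

        void awaitSlot() throws InterruptedException {
            long waitMs;
            synchronized (this) {
                long now = System.currentTimeMillis();
                long slot = Math.max(now, nextSlot);
                nextSlot = slot + minIntervalMs;
                waitMs = slot - now;
            }
            if (waitMs > 0)
                Thread.sleep(waitMs);  // sleep outside the lock so other threads can claim later slots
        }
    }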
Viktor Lofgren
b6265cee11 (feeds) Add timeout code to send()
Due to the unique way Java's HttpClient implements timeouts, we must always wrap send() in an executor to catch the scenario where a server stops sending data mid-response, which would otherwise hang the send method forever.
2025-04-08 22:09:59 +02:00
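A hedged sketch of that workaround (method and variable names are illustrative):

    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import java.util.concurrent.*;

    // Run send() in a separate task so a server that stops sending mid-response
    // cannot hang the calling thread forever.
    static HttpResponse<String> sendWithHardTimeout(HttpClient client,
                                                    HttpRequest request,
                                                    ExecutorService executor,
                                                    Duration timeout) throws Exception {
        Future<HttpResponse<String>> future =
                executor.submit(() -> client.send(request, HttpResponse.BodyHandlers.ofString()));
        try {
            return future.get(timeout.toMillis(), TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupt the stalled send()
            throw e;
        }
    }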
Viktor Lofgren
c91af247e9 (rate-limit) Fix rate limiting logic
The rate limiter was misconfigured to regenerate tokens at a fixed rate of 1 per refillRate seconds, not refillRate per minute.  The default bucket size is also increased to 4x the refill rate.
2025-04-05 12:26:26 +02:00
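A sketch of the corrected token-bucket configuration, mirroring the RateLimiter diff further down in this comparison; where exactly the 4x capacity is applied is a detail of the actual class, here it is inlined for illustration:

    import io.github.bucket4j.Bandwidth;
    import io.github.bucket4j.Bucket;
    import io.github.bucket4j.Refill;
    import java.time.Duration;

    Bucket createBucket(int refillRate) {
        var refill = Refill.greedy(refillRate, Duration.ofSeconds(60)); // refillRate tokens per minute
        var bw = Bandwidth.classic(4L * refillRate, refill);            // bucket holds 4x the refill rate
        return Bucket.builder().addLimit(bw).build();
    }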
Viktor Lofgren
7a31227de1 (crawler) Filter out robots.txt-sitemaps that belong to different domains 2025-04-02 13:35:37 +02:00
Viktor Lofgren
4f477604c5 (crawler) Improve error handling in parquet->slop conversion
Parquet code throws a RuntimeException, which was not correctly caught, leading to a failure to crawl.
2025-04-02 13:16:01 +02:00
Viktor Lofgren
2970f4395b (minor) Test code cleanup 2025-04-02 13:16:01 +02:00
Viktor Lofgren
d1ec909b36 (crawler) Improve handling of timeouts to prevent crawler from getting stuck 2025-04-02 12:57:21 +02:00
Viktor Lofgren
c67c5bbf42 (crawler) Experimentally drop to HTTP 1.1 for crawler to see if this solves stuck send()s 2025-04-01 12:05:21 +02:00
Viktor Lofgren
ecb0e57a1a (crawler) Make the use of virtual threads in the crawler configurable via system properties 2025-03-27 21:26:05 +01:00
Viktor Lofgren
8c61f61b46 (crawler) Add crawling metadata to domainstate db 2025-03-27 16:38:37 +01:00
Viktor Lofgren
662a18c933 Revert "(crawler) Further rearrange crawl order"
This reverts commit 1c2426a052.

The change does not appear necessary to avoid problems.
2025-03-27 11:25:08 +01:00
Viktor Lofgren
1c2426a052 (crawler) Further rearrange crawl order
Limit crawl order preference to edu domains, to avoid hitting stuff like medium and wordpress with shotgun requests.
2025-03-27 11:19:20 +01:00
Viktor Lofgren
34df7441ac (crawler) Add some jitter to crawl delay to avoid accidentally synchronized requests 2025-03-27 11:15:16 +01:00
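A minimal sketch of the jitter; the 25% spread is illustrative, not necessarily what the crawler uses:

    import java.util.concurrent.ThreadLocalRandom;

    // Add up to +25% random jitter so parallel fetch loops drift apart instead of synchronizing.
    static long withJitter(long delayMs) {
        return delayMs + ThreadLocalRandom.current().nextLong(delayMs / 4 + 1);
    }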
Viktor Lofgren
5387e2bd80 (crawler) Adjust crawl order to get a better mixture of domains 2025-03-27 11:12:48 +01:00
Viktor Lofgren
0f3b24d0f8 (crawler) Evaluate virtual threads for the crawler
The change also alters SimpleBlockingThreadPool to add the option to use virtual threads instead of platform threads.
2025-03-27 11:02:21 +01:00
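A sketch of how the thread flavour could be made selectable; the property name here is hypothetical:

    import java.util.concurrent.ThreadFactory;

    boolean useVirtual = Boolean.getBoolean("crawler.useVirtualThreads"); // assumed property name
    ThreadFactory factory = useVirtual
            ? Thread.ofVirtual().name("crawl-", 0).factory()
            : Thread.ofPlatform().name("crawl-", 0).factory();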
Viktor Lofgren
a732095d2a (crawler) Improve crawl task ordering
Further improve the ordering of the crawl tasks in order to ensure that potentially blocking tasks are enqueued as soon as possible.
2025-03-26 16:51:37 +01:00
Viktor Lofgren
6607f0112f (crawler) Improve how the crawler deals with interruptions
In some cases, threads would previously fail to terminate when interrupted.
2025-03-26 16:19:57 +01:00
Viktor Lofgren
4913730de9 (jdk) Upgrade to Java 24 2025-03-26 13:26:06 +01:00
Viktor Lofgren
1db64f9d56 (chore) Fix zookeeper test by upgrading zk image version.
Test suddenly broke due to the increasing entropy of the universe.
2025-03-26 11:47:14 +01:00
Viktor Lofgren
4dcff14498 (search) Improve contrast with light mode 2025-03-25 13:15:31 +01:00
Viktor Lofgren
426658f64e (search) Improve contrast with light mode 2025-03-25 11:54:54 +01:00
Viktor Lofgren
2181b22f05 (crawler) Change default maxConcurrentRequests to 512
This seems like a more sensible default after testing a bit.  May need local tuning.
2025-03-22 12:11:09 +01:00
Viktor Lofgren
42bd79a609 (crawler) Experimentally throttle the number of active retrievals to see how this affects the network performance
There's been some indications that request storms lead to buffer bloat and bad throughput.

This adds a configurable semaphore, by default permitting 100 active requests.
2025-03-22 11:50:37 +01:00
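The throttling idea in sketch form; the property name for the permit count is assumed, not taken from the code:

    import java.util.concurrent.Semaphore;

    Semaphore activeRetrievals = new Semaphore(Integer.getInteger("crawler.maxConcurrentRequests", 100));

    void retrieve(Runnable task) throws InterruptedException {
        activeRetrievals.acquire();
        try {
            task.run();
        } finally {
            activeRetrievals.release();
        }
    }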
Viktor Lofgren
b91c1e528a (favicon) Send dummy svg result when image is missing
This prevents the browser from rendering a "broken image" in this scenario.
2025-03-21 15:15:14 +01:00
Viktor Lofgren
b1130d7a04 (domainstatedb) Allow creation of disconnected db
This is required for executor services that do not have crawl data to still be able to initialize.
2025-03-21 14:59:36 +01:00
Viktor Lofgren
8364bcdc97 (favicon) Add favicons to the matchograms 2025-03-21 14:30:40 +01:00
Viktor Lofgren
626cab5fab (favicon) Add favicon to site overview 2025-03-21 14:15:23 +01:00
Viktor Lofgren
cfd4712191 (favicon) Add capability for fetching favicons 2025-03-21 13:38:58 +01:00
Viktor Lofgren
9f18ced73d (crawler) Improve deferred task behavior 2025-03-18 12:54:18 +01:00
Viktor Lofgren
18e91269ab (crawler) Improve deferred task behavior 2025-03-18 12:25:22 +01:00
Viktor Lofgren
e315ca5758 (search) Change icon for small web filter
The previous icon was of an irregular size and shifted the layout in an unaesthetic way.
2025-03-17 12:07:34 +01:00
Viktor Lofgren
3ceea17c1d (search) Adjustments to device detection in CSS
Use pointer:fine media query to better distinguish between mobile devices and PCs with a window in portrait orientation.

With this, we never show mobile filtering functionality on mobile; and never show the touch-inaccessible minimized sidebar on mobile.
2025-03-17 12:04:34 +01:00
Viktor Lofgren
b34527c1a3 (search) Add small web filter for new UI 2025-03-17 11:39:19 +01:00
Viktor Lofgren
185bf28fca (crawler) Correct issue leading to parquet files not being correctly preconverted
Path.endsWith("str") != String.endsWith(".str")
2025-03-10 13:48:12 +01:00
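The distinction in a nutshell:

    import java.nio.file.Path;

    Path p = Path.of("crawl-data/0042.parquet");
    boolean a = p.endsWith(".parquet");            // false: Path.endsWith compares whole path elements
    boolean b = p.toString().endsWith(".parquet"); // true: plain string suffix check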
Viktor Lofgren
78cc25584a (crawler) Add error logging when entering bad path for historical crawl data 2025-03-10 13:38:40 +01:00
Viktor Lofgren
62ba30bacf (common) Log info about metrics server 2025-03-10 13:12:39 +01:00
Viktor Lofgren
3bb84eb206 (common) Log info about metrics server 2025-03-10 13:03:48 +01:00
Viktor Lofgren
be7d13ccce (crawler) Correct task execution logic in crawler
The old behavior would flag domains as pending too soon, leading to them being omitted from execution if they were not immediately available to run.
2025-03-09 13:47:51 +01:00
Viktor Lofgren
8c088a7c0b (crawler) Remove custom thread factory
This was causing issues, and not really doing much of benefit.
2025-03-09 11:50:52 +01:00
Viktor Lofgren
ea9a642b9b (crawler) More effective task scheduling in the crawler
This should hopefully allow more threads to be busy
2025-03-09 11:44:59 +01:00
Viktor Lofgren
27f528af6a (search) Fix "Remove Javascript" toggle
A bug was introduced at some point where the special keyword for filtering on javascript was changed to special:scripts, from js:true/js:false.

Solves issue #155
2025-02-28 12:03:04 +01:00
Viktor Lofgren
20ca41ec95 (processed model) Use String columns instead of Txt columns for SlopDocumentRecord
It's very likely TxtStringColumn is the culprit of the bug seen in https://github.com/MarginaliaSearch/MarginaliaSearch/issues/154 where the wrong URL was shown for a search result.
2025-02-24 11:41:51 +01:00
Viktor Lofgren
7671f0d9e4 (search) Display message when no search results are found 2025-02-24 11:15:55 +01:00
Viktor Lofgren
44d6bc71b7 (assistant) Migrate to Jooby framework 2025-02-15 13:28:12 +01:00
Viktor Lofgren
9d302e2973 (assistant) Migrate to Jooby framework 2025-02-15 13:26:04 +01:00
Viktor Lofgren
f553701224 (assistant) Migrate to Jooby framework 2025-02-15 13:21:48 +01:00
Viktor Lofgren
f076d05595 (deps) Upgrade slf4j to latest 2025-02-15 12:50:16 +01:00
Viktor Lofgren
b513809710 (*) Stopgap fix for metrics server initialization errors bringing down services 2025-02-14 17:09:48 +01:00
Viktor Lofgren
7519b28e21 (search) Correct exception from misbehaving bots feeding invalid urls 2025-02-14 17:05:24 +01:00
Viktor Lofgren
3eac4dd57f (search) Correct exception in error handler when page is missing 2025-02-14 17:00:21 +01:00
Viktor Lofgren
4c2810720a (search) Add redirect handler for full URLs in the /site endpoint 2025-02-14 16:31:11 +01:00
Viktor Lofgren
8480ba8daa (live-capture) Code cleanup 2025-02-04 14:05:36 +01:00
Viktor Lofgren
fbba392491 (live-capture) Send a UA-string from the browserless fetcher as well
The change also introduces a somewhat convoluted wiremock test to intercept and verify that these headers are in fact sent
2025-02-04 13:36:49 +01:00
Viktor Lofgren
530eb35949 (update-rss) Do not fail the feed fetcher control actor if it takes a long time to complete. 2025-02-03 11:35:32 +01:00
Viktor Lofgren
c2dd2175a2 (search) Add new query expansion rule contracting WORD NUM pairs into WORD-NUM and WORDNUM 2025-02-01 13:13:30 +01:00
Viktor Lofgren
b8581b0f56 (crawler) Safe sanitization of headers during warc->slop conversion
The warc->slop converter was rejecting some items because they had headers that were representable in the Warc code's MessageHeader map implementation, but illegal in the HttpHeaders implementation.

This is fixed by manually filtering these out.  Ostensibly the constructor has a filtering predicate, but it annoyingly runs too late and fails to prevent the problem.
2025-01-31 12:47:42 +01:00
Viktor Lofgren
2ea34767d8 (crawler) Use the response URL when resolving relative links
The crawler was incorrectly using the request URL as the base URL when resolving relative links.  This caused problems when encountering redirects.

For example, if we fetch /log and are redirected to /log/, and then find links to foo/ and bar/, these would resolve to /foo and /bar rather than /log/foo and /log/bar.
2025-01-31 12:40:13 +01:00
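The difference, illustrated with java.net.URI resolution against a hypothetical host:

    import java.net.URI;

    URI requestUrl  = URI.create("https://example.com/log");   // what was requested
    URI responseUrl = URI.create("https://example.com/log/");  // where the redirect landed

    requestUrl.resolve("foo/");   // https://example.com/foo/      -- wrong base
    responseUrl.resolve("foo/");  // https://example.com/log/foo/  -- correct base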
Viktor Lofgren
e9af838231 (actor) Fix migration actor final steps 2025-01-30 11:48:21 +01:00
Viktor Lofgren
ae0cad47c4 (actor) Utility method for getting a json prototype for actor states
If we can hook this into the control gui somehow, it'll make for a nice QOL upgrade when manually interacting with the actors.
2025-01-29 15:20:25 +01:00
Viktor Lofgren
5fbc8ef998 (misc) Tidying 2025-01-29 15:17:04 +01:00
Viktor Lofgren
32c6dd9e6a (actor) Delete old data in the migration actor 2025-01-29 14:51:46 +01:00
Viktor Lofgren
6ece6a6cfb (actor) Improve resilience for the migration actor 2025-01-29 14:43:09 +01:00
Viktor Lofgren
39cd1c18f8 Automatically run npm install tailwindcss@3 via setup.sh, as the new default version of the package is incompatible with the project 2025-01-29 12:21:08 +01:00
Viktor
eb65daaa88 Merge pull request #151 from Lionstiger/master
fix small grammar error in footerLegal.jte
2025-01-28 21:49:50 +01:00
Viktor
0bebdb6e33 Merge branch 'master' into master 2025-01-28 21:49:36 +01:00
Viktor Lofgren
1e50e392c6 (actor) Improve logging and error handling for data migration actor 2025-01-28 15:34:36 +01:00
Viktor Lofgren
fb673de370 (crawler) Change the header 'User-agent' to 'User-Agent' 2025-01-28 15:34:16 +01:00
Viktor Lofgren
eee73ab16c (crawler) Be more lenient when performing a domain probe 2025-01-28 15:24:30 +01:00
Viktor Lofgren
5354e034bf (search) Minor grammar fix 2025-01-27 18:36:31 +01:00
Magnus Wulf
72384ad6ca fix small grammar error 2025-01-27 15:04:57 +01:00
Viktor Lofgren
a2b076f9be (converter) Add progress tracking for big domains in converter 2025-01-26 18:03:59 +01:00
Viktor Lofgren
c8b0a32c0f (crawler) Reduce long retention of CrawlDataReference objects and their associated SerializableCrawlDataStreams 2025-01-26 15:40:17 +01:00
Viktor Lofgren
f0d74aa3bb (converter) Fix close() ordering to prevent converter crash 2025-01-26 14:47:36 +01:00
Viktor Lofgren
74a1f100f4 (converter) Refactor to remove CrawledDomainReader and move its functionality into SerializableCrawlDataStream 2025-01-26 14:46:50 +01:00
Viktor Lofgren
eb049658e4 (converter) Add truncation at the parser step to prevent the converter from spending too much time on excessively large documents
Refactor to do this without introducing additional copies
2025-01-26 14:28:53 +01:00
Viktor Lofgren
db138b2a6f (converter) Add truncation at the parser step to prevent the converter from spending too much time on excessively large documents 2025-01-26 14:25:57 +01:00
Viktor Lofgren
1673fc284c (converter) Reduce lock contention in converter by separating the processing of full and simple-track domains 2025-01-26 13:21:46 +01:00
Viktor Lofgren
503ea57d5b (converter) Reduce lock contention in converter by separating the processing of full and simple-track domains 2025-01-26 13:18:14 +01:00
Viktor Lofgren
18ca926c7f (converter) Truncate excessively long strings in SentenceExtractor, malformed data was effectively DOS:ing the converter 2025-01-26 12:52:54 +01:00
Viktor Lofgren
db99242db2 (converter) Adding some logging around the simple processing track to investigate an issue with the converter stalling 2025-01-26 12:02:00 +01:00
Viktor Lofgren
2b9d2985ba (doc) Update readme with up-to-date install instructions. 2025-01-24 18:51:41 +01:00
Viktor Lofgren
eeb6ecd711 (search) Make it clearer that the affiliate marker applies to the result, and not the search engine's relation to the result. 2025-01-24 18:50:00 +01:00
Viktor Lofgren
1f58aeadbf (build) Upgrade JIB 2025-01-24 18:49:28 +01:00
Viktor Lofgren
3d68be64da (crawler) Add default CT when it's missing for icons 2025-01-22 13:55:47 +01:00
Viktor Lofgren
668f3b16ef (search) Redirect ^/site/$ to /site 2025-01-22 13:35:18 +01:00
Viktor Lofgren
98a340a0d1 (crawler) Add favicon data to domain state db in its own table 2025-01-22 11:41:20 +01:00
Viktor Lofgren
8862100f7e (crawler) Improve logging and error handling 2025-01-21 21:44:21 +01:00
Viktor Lofgren
274941f6de (crawler) Smarter parquet->slop crawl data migration 2025-01-21 21:26:12 +01:00
Viktor Lofgren
abec83582d Fix refactoring gore 2025-01-21 15:08:04 +01:00
Viktor Lofgren
569520c9b6 (index) Add manual adjustments for rankings based on domain 2025-01-21 15:07:43 +01:00
Viktor Lofgren
088310e998 (converter) Improve simple processing performance
The recent slop migration changes introduced a regression in the performance of the simple conversion track.  This fixes the regression.
2025-01-21 14:13:33 +01:00
Viktor
270cab874b Merge pull request #134 from MarginaliaSearch/slop-crawl-data-spike
Store crawl data in slop instead of parquet
2025-01-21 13:34:22 +01:00
Viktor Lofgren
4c74e280d3 (crawler) Fix urlencoding in sitemap fetcher 2025-01-21 13:33:35 +01:00
Viktor Lofgren
5b347e17ac (crawler) Automatically migrate to slop from parquet when crawling 2025-01-21 13:33:14 +01:00
Viktor Lofgren
55d6ab933f Merge branch 'master' into slop-crawl-data-spike 2025-01-21 13:32:58 +01:00
Viktor Lofgren
43b74e9706 (crawler) Fix exception handler and resource leak in WarcRecorder 2025-01-20 23:45:28 +01:00
Viktor Lofgren
579a115243 (crawler) Reduce log spam from error handling in new sitemap fetcher 2025-01-20 23:17:13 +01:00
Viktor
2c67f50a43 Merge pull request #150 from MarginaliaSearch/httpclient-in-crawler
Reduce the use of 3rd party code in the crawler
2025-01-20 19:35:30 +01:00
Viktor Lofgren
78a958e2b0 (crawler) Fix broken test that started failing after the search engine moved to a new domain 2025-01-20 18:52:14 +01:00
Viktor Lofgren
4e939389b2 (crawler) New Jsoup based sitemap parser 2025-01-20 14:37:44 +01:00
Viktor Lofgren
e67a9bdb91 (crawler) Migrate away from using OkHttp in the crawler, use Java's HttpClient instead. 2025-01-19 15:07:11 +01:00
Viktor Lofgren
567e4e1237 (crawler) Fast detection and bail-out for crawler traps
Improve logging and exclude robots.txt from this logic.
2025-01-18 15:28:54 +01:00
Viktor Lofgren
4342e42722 (crawler) Fast detection and bail-out for crawler traps
Nepenthes has been doing the rounds on social media; this adds an easy detection and mitigation mechanism for this type of trap, as sadly not all webmasters set up their robots.txt correctly.  Out-of-the-box crawl limits will also deal with this type of attack, but this fix is faster.
2025-01-17 13:02:57 +01:00
Viktor Lofgren
bc818056e6 (run) Fix templates for mariadb
Apparently the docker image contract changed at some point, and now we should spawn mariadbd and not mysqld; mariadb-admin and not mysqladmin.
2025-01-16 15:27:02 +01:00
Viktor Lofgren
de2feac238 (chore) Upgrade jib from 3.4.3 to 3.4.4 2025-01-16 15:10:45 +01:00
Viktor Lofgren
1e770205a5 (search) Dyslexia fix 2025-01-12 20:40:14 +01:00
Viktor
e44ecd6d69 Merge pull request #149 from MarginaliaSearch/vlofgren-patch-1
Update ROADMAP.md
2025-01-12 20:38:36 +01:00
Viktor
5b93a0e633 Update ROADMAP.md 2025-01-12 20:38:11 +01:00
Viktor
08fb0e5efe Update ROADMAP.md 2025-01-12 20:37:43 +01:00
Viktor
bcf67782ea Update ROADMAP.md 2025-01-12 20:37:09 +01:00
Viktor Lofgren
ef3f175ede (search) Don't clobber the search query URL with default values 2025-01-10 15:57:30 +01:00
Viktor Lofgren
bbe4b5d9fd Revert experimental changes 2025-01-10 15:52:02 +01:00
Viktor Lofgren
c67a635103 (search, experimental) Add a few debugging tracks to the search UI 2025-01-10 15:44:44 +01:00
Viktor Lofgren
20b24133fb (search, experimental) Add a few debugging tracks to the search UI 2025-01-10 15:34:48 +01:00
Viktor Lofgren
f2567677e8 (index-client) Clean up index client code
Improve error handling.  This should be a relatively rare case, but we don't want one bad index partition to blow up the entire query.
2025-01-10 15:17:07 +01:00
Viktor Lofgren
bc2c2061f2 (index-client) Clean up index client code
This has the RPC stream reception performed in parallel in separate threads, rather than blocking sequentially in the main thread, hopefully giving a slight performance boost.
2025-01-10 15:14:42 +01:00
Viktor Lofgren
1c7f5a31a5 (search) Further reduce the number of db queries by adding more caching to DbDomainQueries. 2025-01-10 14:17:29 +01:00
Viktor Lofgren
59a8ea60f7 (search) Further reduce the number of db queries by adding more caching to DbDomainQueries. 2025-01-10 14:15:22 +01:00
Viktor Lofgren
aa9b1244ea (search) Reduce the number of db queries a bit by caching data that doesn't change too often 2025-01-10 13:56:04 +01:00
Viktor Lofgren
2d17233366 (search) Reduce the number of db queries a bit by caching data that doesn't change too often 2025-01-10 13:53:56 +01:00
Viktor Lofgren
b245cc9f38 (search) Reduce the number of db queries a bit by caching data that doesn't change too often 2025-01-10 13:46:19 +01:00
Viktor Lofgren
6614d05bdf (db) Make db pool size configurable 2025-01-09 20:20:51 +01:00
Viktor Lofgren
55aeb03c4a (feeds) Replace rssreader based parsing with a custom jsoup based rss parser
This solves some issues with the rssreader based parser, which was very picky about the XML being valid.  Jsoup is much more lenient when parsing malformed XML.
2025-01-09 18:29:55 +01:00
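A rough sketch of the approach using jsoup's XML parser; the element selection is illustrative, not the actual parser's logic:

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;
    import org.jsoup.parser.Parser;

    static void parseFeed(String feedXml) {
        // The XML parser tolerates malformed feeds that a strict parser would reject.
        Document doc = Jsoup.parse(feedXml, "", Parser.xmlParser());
        for (Element item : doc.select("item, entry")) {
            Element title = item.selectFirst("title");
            Element link = item.selectFirst("link");
            // build the feed item from title/link/pubDate etc.
        }
    }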
Viktor Lofgren
faa589962f (live-capture) Browserless now requires a token 2025-01-09 14:51:11 +01:00
Viktor Lofgren
c7edd6b39f (live-capture) Browserless now requires a token 2025-01-09 14:46:05 +01:00
Viktor Lofgren
79da622e3b (search) Update front page with new banner about move 2025-01-08 21:38:19 +01:00
Viktor Lofgren
3da8337ba6 (feeds) Add system property for exporting fetched feeds to a slop table for debugging 2025-01-08 20:49:16 +01:00
Viktor Lofgren
a32d230f0a (special) Trigger deployment 2025-01-08 20:07:54 +01:00
Viktor Lofgren
3772bfd387 (query) Fix handling of optional ranking parameters 2025-01-08 17:11:22 +01:00
Viktor Lofgren
02a7900d1a (search) Correct search-in-title toggle in search UI 2025-01-08 16:51:10 +01:00
Viktor Lofgren
a1fb92468f (refac) Remove ResultRankingParameters, QueryLimits class and use protobuf classes directly instead
This is primarily to make the code a bit easier to reason about, and will reduce the level of indirection and data copying in the search-service->query-service->index-service communication chain.
2025-01-08 16:15:57 +01:00
Viktor Lofgren
b7f0a2a98e (search-service) Fix metrics for errors and request times
This was previously in place, but broke during the jooby migration.
2025-01-08 14:10:43 +01:00
Viktor Lofgren
5fb76b2e79 (search-service) Fix metrics for errors and request times
This was previously in place, but broke during the jooby migration.
2025-01-08 14:06:03 +01:00
Viktor Lofgren
ad8c97f342 (search-service) Begin replacement of the crawl queue mechanism with node_affinity flagging
Previously a special db table was used to hold domains slated for crawling, but this is deprecated.  Instead, each domain now has a node_affinity flag that decides its indexing state: a value of -1 indicates it shouldn't be crawled, a value of 0 means it's slated for crawling by the next index partition to be crawled, and a positive value means it's assigned to an index partition.

The change set also adds a test case validating the modified behavior.
2025-01-08 13:25:56 +01:00
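The convention restated as a hypothetical helper, purely for illustration:

    static String describeNodeAffinity(int nodeAffinity) {
        if (nodeAffinity < 0)  return "not to be crawled";
        if (nodeAffinity == 0) return "slated for the next index partition that crawls";
        return "assigned to index partition " + nodeAffinity;
    }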
Viktor Lofgren
dc1b6373eb (search-service) Clean up readme 2025-01-08 13:04:39 +01:00
Viktor Lofgren
983d6d067c (search-service) Add indexing indicator to sibling domains listing 2025-01-08 12:58:34 +01:00
Viktor Lofgren
a84a06975c (ranking-params) Add disable penalties flag to ranking params
This will help debugging ranking issues.  Later it may be added to some filters.
2025-01-08 00:16:49 +01:00
Viktor Lofgren
d2864c13ec (query-params) Add additional permitted query params 2025-01-07 20:21:44 +01:00
Viktor Lofgren
03ba53ce51 (legacy-search) Update nav bar with correct links 2025-01-07 17:44:52 +01:00
Viktor Lofgren
d4a6684931 (specialization) Soften length requirements for wiki-specialized documents (incl. cppreference) 2025-01-07 15:53:25 +01:00
Viktor Lofgren
47e58a21c6 Refactor documentBody method and ContentType charset handling
Updated the `documentBody` method to improve parsing retries and error handling. Refactored `ContentType` charset processing with cleaner logic, removing redundant handling for unsupported charsets. Also, updated the version of the `slop` library in dependency settings.
2024-12-17 17:11:37 +01:00
Viktor Lofgren
3714104976 Add loader for slop data in converter.
Also alter CrawledDocument to not require String parsing of the underlying byte[] data.  This should reduce the number of large memory allocations quite significantly, hopefully reducing the GC churn a bit.
2024-12-17 15:40:24 +01:00
Viktor Lofgren
f6f036b9b1 Switch to new Slop format for crawl data storage and processing.
Replaces Parquet output and processing with the new Slop-based format. Includes data migration functionality, updates to handling and writing of crawl data, and introduces support for SLOP in domain readers and converters.
2024-12-15 19:34:03 +01:00
Viktor Lofgren
b510b7feb8 Spike for storing crawl data in slop instead of parquet
This seems to reduce RAM overhead to 100s of MB (from ~2 GB), as well as roughly doubling the read speeds.  On-disk size is virtually identical.
2024-12-15 15:49:47 +01:00
219 changed files with 5858 additions and 2731 deletions

View File

@@ -1,4 +1,4 @@
-# Roadmap 2024-2025
+# Roadmap 2025
 This is a roadmap with major features planned for Marginalia Search.
@@ -30,12 +30,6 @@ Retaining the ability to independently crawl the web is still strongly desirable
 The search engine has a bit of a problem showing spicy content mixed in with the results. It would be desirable to have a way to filter this out. It's likely something like a URL blacklist (e.g. [UT1](https://dsi.ut-capitole.fr/blacklists/index_en.php) )
 combined with naive bayesian filter would go a long way, or something more sophisticated...?
-## Web Design Overhaul
-The design is kinda clunky and hard to maintain, and needlessly outdated-looking.
-In progress: PR [#127](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/127) -- demo available at https://test.marginalia.nu/
 ## Additional Language Support
 It would be desirable if the search engine supported more languages than English. This is partially about
@@ -62,8 +56,31 @@ filter for any API consumer.
 I've talked to the stract dev and he does not think it's a good idea to mimic their optics language, which is quite ad-hoc, but instead to work together to find some new common description language for this.
+## Show favicons next to search results
+This is expected from search engines. Basic proof of concept sketch of fetching this data has been done, but the feature is some way from being reality.
+## Specialized crawler for github
+One of the search engine's biggest limitations right now is that it does not index github at all. A specialized crawler that fetches at least the readme.md would go a long way toward providing search capabilities in this domain.
 # Completed
+## Web Design Overhaul (COMPLETED 2025-01)
+The design is kinda clunky and hard to maintain, and needlessly outdated-looking.
+PR [#127](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/127)
+## Finalize RSS support (COMPLETED 2024-11)
+Marginalia has experimental RSS preview support for a few domains. This works well and
+it should be extended to all domains. It would also be interesting to offer search of the
+RSS data itself, or use the RSS set to feed a special live index that updates faster than the
+main dataset.
+Completed with PR [#122](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/122) and PR [#125](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/125)
 ## Proper Position Index (COMPLETED 2024-09)
 The search engine uses a fixed width bit mask to indicate word positions. It has the benefit
@@ -76,11 +93,3 @@ list, as is the civilized way of doing this.
 Completed with PR [#99](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/99)
-## Finalize RSS support (COMPLETED 2024-11)
-Marginalia has experimental RSS preview support for a few domains. This works well and
-it should be extended to all domains. It would also be interesting to offer search of the
-RSS data itself, or use the RSS set to feed a special live index that updates faster than the
-main dataset.
-Completed with PR [#122](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/122) and PR [#125](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/125)

View File

@@ -5,7 +5,7 @@ plugins {
 // This is a workaround for a bug in the Jib plugin that causes it to stall randomly
 // https://github.com/GoogleContainerTools/jib/issues/3347
-    id 'com.google.cloud.tools.jib' version '3.4.3' apply(false)
+    id 'com.google.cloud.tools.jib' version '3.4.4' apply(false)
 }
 group 'marginalia'
@@ -43,12 +43,11 @@ subprojects.forEach {it ->
 }
 ext {
-    jvmVersion=23
+    jvmVersion = 24
-    dockerImageBase='container-registry.oracle.com/graalvm/jdk:23'
+    dockerImageBase='container-registry.oracle.com/graalvm/jdk:24'
     dockerImageTag='latest'
     dockerImageRegistry='marginalia'
-    jibVersion = '3.4.3'
+    jibVersion = '3.4.4'
 }
 idea {

View File

@@ -24,58 +24,4 @@ public class LanguageModels {
         this.fasttextLanguageModel = fasttextLanguageModel;
         this.segments = segments;
     }
-    public static LanguageModelsBuilder builder() {
-        return new LanguageModelsBuilder();
-    }
-    public static class LanguageModelsBuilder {
-        private Path termFrequencies;
-        private Path openNLPSentenceDetectionData;
-        private Path posRules;
-        private Path posDict;
-        private Path fasttextLanguageModel;
-        private Path segments;
-        LanguageModelsBuilder() {
-        }
-        public LanguageModelsBuilder termFrequencies(Path termFrequencies) {
-            this.termFrequencies = termFrequencies;
-            return this;
-        }
-        public LanguageModelsBuilder openNLPSentenceDetectionData(Path openNLPSentenceDetectionData) {
-            this.openNLPSentenceDetectionData = openNLPSentenceDetectionData;
-            return this;
-        }
-        public LanguageModelsBuilder posRules(Path posRules) {
-            this.posRules = posRules;
-            return this;
-        }
-        public LanguageModelsBuilder posDict(Path posDict) {
-            this.posDict = posDict;
-            return this;
-        }
-        public LanguageModelsBuilder fasttextLanguageModel(Path fasttextLanguageModel) {
-            this.fasttextLanguageModel = fasttextLanguageModel;
-            return this;
-        }
-        public LanguageModelsBuilder segments(Path segments) {
-            this.segments = segments;
-            return this;
-        }
-        public LanguageModels build() {
-            return new LanguageModels(this.termFrequencies, this.openNLPSentenceDetectionData, this.posRules, this.posDict, this.fasttextLanguageModel, this.segments);
-        }
-        public String toString() {
-            return "LanguageModels.LanguageModelsBuilder(termFrequencies=" + this.termFrequencies + ", openNLPSentenceDetectionData=" + this.openNLPSentenceDetectionData + ", posRules=" + this.posRules + ", posDict=" + this.posDict + ", fasttextLanguageModel=" + this.fasttextLanguageModel + ", segments=" + this.segments + ")";
-        }
-    }
 }

View File

@@ -20,7 +20,11 @@ public class DbDomainQueries {
private final HikariDataSource dataSource; private final HikariDataSource dataSource;
private static final Logger logger = LoggerFactory.getLogger(DbDomainQueries.class); private static final Logger logger = LoggerFactory.getLogger(DbDomainQueries.class);
private final Cache<EdgeDomain, Integer> domainIdCache = CacheBuilder.newBuilder().maximumSize(10_000).build(); private final Cache<EdgeDomain, Integer> domainIdCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
private final Cache<EdgeDomain, DomainIdWithNode> domainWithNodeCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
private final Cache<Integer, EdgeDomain> domainNameCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
private final Cache<String, List<DomainWithNode>> siblingsCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
@Inject @Inject
public DbDomainQueries(HikariDataSource dataSource) public DbDomainQueries(HikariDataSource dataSource)
@@ -30,16 +34,21 @@ public class DbDomainQueries {
public Integer getDomainId(EdgeDomain domain) throws NoSuchElementException { public Integer getDomainId(EdgeDomain domain) throws NoSuchElementException {
try (var connection = dataSource.getConnection()) { try {
return domainIdCache.get(domain, () -> { return domainIdCache.get(domain, () -> {
try (var stmt = connection.prepareStatement("SELECT ID FROM EC_DOMAIN WHERE DOMAIN_NAME=?")) { try (var connection = dataSource.getConnection();
var stmt = connection.prepareStatement("SELECT ID FROM EC_DOMAIN WHERE DOMAIN_NAME=?")) {
stmt.setString(1, domain.toString()); stmt.setString(1, domain.toString());
var rsp = stmt.executeQuery(); var rsp = stmt.executeQuery();
if (rsp.next()) { if (rsp.next()) {
return rsp.getInt(1); return rsp.getInt(1);
} }
} }
catch (SQLException ex) {
throw new RuntimeException(ex);
}
throw new NoSuchElementException(); throw new NoSuchElementException();
}); });
} }
@@ -49,8 +58,33 @@ public class DbDomainQueries {
catch (ExecutionException ex) { catch (ExecutionException ex) {
throw new RuntimeException(ex.getCause()); throw new RuntimeException(ex.getCause());
} }
catch (SQLException ex) { }
throw new RuntimeException(ex);
public DomainIdWithNode getDomainIdWithNode(EdgeDomain domain) throws NoSuchElementException {
try {
return domainWithNodeCache.get(domain, () -> {
try (var connection = dataSource.getConnection();
var stmt = connection.prepareStatement("SELECT ID, NODE_AFFINITY FROM EC_DOMAIN WHERE DOMAIN_NAME=?")) {
stmt.setString(1, domain.toString());
var rsp = stmt.executeQuery();
if (rsp.next()) {
return new DomainIdWithNode(rsp.getInt(1), rsp.getInt(2));
}
}
catch (SQLException ex) {
throw new RuntimeException(ex);
}
throw new NoSuchElementException();
});
}
catch (UncheckedExecutionException ex) {
throw new NoSuchElementException();
}
catch (ExecutionException ex) {
throw new RuntimeException(ex.getCause());
} }
} }
@@ -84,46 +118,62 @@ public class DbDomainQueries {
} }
public Optional<EdgeDomain> getDomain(int id) { public Optional<EdgeDomain> getDomain(int id) {
try (var connection = dataSource.getConnection()) {
EdgeDomain existing = domainNameCache.getIfPresent(id);
if (existing != null) {
return Optional.of(existing);
}
try (var connection = dataSource.getConnection()) {
try (var stmt = connection.prepareStatement("SELECT DOMAIN_NAME FROM EC_DOMAIN WHERE ID=?")) { try (var stmt = connection.prepareStatement("SELECT DOMAIN_NAME FROM EC_DOMAIN WHERE ID=?")) {
stmt.setInt(1, id); stmt.setInt(1, id);
var rsp = stmt.executeQuery(); var rsp = stmt.executeQuery();
if (rsp.next()) { if (rsp.next()) {
return Optional.of(new EdgeDomain(rsp.getString(1))); var val = new EdgeDomain(rsp.getString(1));
domainNameCache.put(id, val);
return Optional.of(val);
} }
return Optional.empty(); return Optional.empty();
} }
} }
catch (UncheckedExecutionException ex) {
throw new RuntimeException(ex.getCause());
}
catch (SQLException ex) { catch (SQLException ex) {
throw new RuntimeException(ex); throw new RuntimeException(ex);
} }
} }
public List<EdgeDomain> otherSubdomains(EdgeDomain domain, int cnt) { public List<DomainWithNode> otherSubdomains(EdgeDomain domain, int cnt) throws ExecutionException {
List<EdgeDomain> ret = new ArrayList<>(); String topDomain = domain.topDomain;
try (var conn = dataSource.getConnection(); return siblingsCache.get(topDomain, () -> {
var stmt = conn.prepareStatement("SELECT DOMAIN_NAME FROM EC_DOMAIN WHERE DOMAIN_TOP = ? LIMIT ?")) { List<DomainWithNode> ret = new ArrayList<>();
stmt.setString(1, domain.topDomain);
stmt.setInt(2, cnt);
var rs = stmt.executeQuery(); try (var conn = dataSource.getConnection();
while (rs.next()) { var stmt = conn.prepareStatement("SELECT DOMAIN_NAME, NODE_AFFINITY FROM EC_DOMAIN WHERE DOMAIN_TOP = ? LIMIT ?")) {
var sibling = new EdgeDomain(rs.getString(1)); stmt.setString(1, topDomain);
stmt.setInt(2, cnt);
if (sibling.equals(domain)) var rs = stmt.executeQuery();
continue; while (rs.next()) {
var sibling = new EdgeDomain(rs.getString(1));
ret.add(sibling); if (sibling.equals(domain))
continue;
ret.add(new DomainWithNode(sibling, rs.getInt(2)));
}
} catch (SQLException e) {
logger.error("Failed to get domain neighbors");
} }
} catch (SQLException e) { return ret;
logger.error("Failed to get domain neighbors"); });
}
return ret;
} }
public record DomainWithNode (EdgeDomain domain, int nodeAffinity) {
public boolean isIndexed() {
return nodeAffinity > 0;
}
}
public record DomainIdWithNode (int domainId, int nodeAffinity) { }
} }

View File

@@ -1,118 +0,0 @@
package nu.marginalia.db;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.OptionalInt;
/** Class used in exporting data. This is intended to be used for a brief time
* and then discarded, not kept around as a service.
*/
public class DbDomainStatsExportMultitool implements AutoCloseable {
private final Connection connection;
private final int nodeId;
private final PreparedStatement knownUrlsQuery;
private final PreparedStatement visitedUrlsQuery;
private final PreparedStatement goodUrlsQuery;
private final PreparedStatement domainNameToId;
private final PreparedStatement allDomainsQuery;
private final PreparedStatement crawlQueueDomains;
private final PreparedStatement indexedDomainsQuery;
public DbDomainStatsExportMultitool(HikariDataSource dataSource, int nodeId) throws SQLException {
this.connection = dataSource.getConnection();
this.nodeId = nodeId;
knownUrlsQuery = connection.prepareStatement("""
SELECT KNOWN_URLS
FROM EC_DOMAIN INNER JOIN DOMAIN_METADATA
ON EC_DOMAIN.ID=DOMAIN_METADATA.ID
WHERE DOMAIN_NAME=?
""");
visitedUrlsQuery = connection.prepareStatement("""
SELECT VISITED_URLS
FROM EC_DOMAIN INNER JOIN DOMAIN_METADATA
ON EC_DOMAIN.ID=DOMAIN_METADATA.ID
WHERE DOMAIN_NAME=?
""");
goodUrlsQuery = connection.prepareStatement("""
SELECT GOOD_URLS
FROM EC_DOMAIN INNER JOIN DOMAIN_METADATA
ON EC_DOMAIN.ID=DOMAIN_METADATA.ID
WHERE DOMAIN_NAME=?
""");
domainNameToId = connection.prepareStatement("""
SELECT ID
FROM EC_DOMAIN
WHERE DOMAIN_NAME=?
""");
allDomainsQuery = connection.prepareStatement("""
SELECT DOMAIN_NAME
FROM EC_DOMAIN
""");
crawlQueueDomains = connection.prepareStatement("""
SELECT DOMAIN_NAME
FROM CRAWL_QUEUE
""");
indexedDomainsQuery = connection.prepareStatement("""
SELECT DOMAIN_NAME
FROM EC_DOMAIN
WHERE INDEXED > 0
""");
}
public OptionalInt getVisitedUrls(String domainName) throws SQLException {
return executeNameToIntQuery(domainName, visitedUrlsQuery);
}
public OptionalInt getDomainId(String domainName) throws SQLException {
return executeNameToIntQuery(domainName, domainNameToId);
}
public List<String> getCrawlQueueDomains() throws SQLException {
return executeListQuery(crawlQueueDomains, 100);
}
public List<String> getAllIndexedDomains() throws SQLException {
return executeListQuery(indexedDomainsQuery, 100_000);
}
private OptionalInt executeNameToIntQuery(String domainName, PreparedStatement statement)
throws SQLException {
statement.setString(1, domainName);
var rs = statement.executeQuery();
if (rs.next()) {
return OptionalInt.of(rs.getInt(1));
}
return OptionalInt.empty();
}
private List<String> executeListQuery(PreparedStatement statement, int sizeHint) throws SQLException {
List<String> ret = new ArrayList<>(sizeHint);
var rs = statement.executeQuery();
while (rs.next()) {
ret.add(rs.getString(1));
}
return ret;
}
@Override
public void close() throws SQLException {
knownUrlsQuery.close();
goodUrlsQuery.close();
visitedUrlsQuery.close();
allDomainsQuery.close();
crawlQueueDomains.close();
domainNameToId.close();
connection.close();
}
}

View File

@@ -14,7 +14,7 @@ public class EdgeDomain implements Serializable {
 @Nonnull
 public final String topDomain;
-    public EdgeDomain(String host) {
+    public EdgeDomain(@Nonnull String host) {
         Objects.requireNonNull(host, "domain name must not be null");
         host = host.toLowerCase();
@@ -61,6 +61,10 @@ public class EdgeDomain implements Serializable {
         this.topDomain = topDomain;
     }
+    public static String getTopDomain(String host) {
+        return new EdgeDomain(host).topDomain;
+    }
     private boolean looksLikeGovTld(String host) {
         if (host.length() < 8)
             return false;
@@ -116,24 +120,6 @@ public class EdgeDomain implements Serializable {
         return topDomain.substring(0, cutPoint).toLowerCase();
     }
-    public String getLongDomainKey() {
-        StringBuilder ret = new StringBuilder();
-        int cutPoint = topDomain.indexOf('.');
-        if (cutPoint < 0) {
-            ret.append(topDomain);
-        } else {
-            ret.append(topDomain, 0, cutPoint);
-        }
-        if (!subDomain.isEmpty() && !"www".equals(subDomain)) {
-            ret.append(":");
-            ret.append(subDomain);
-        }
-        return ret.toString().toLowerCase();
-    }
     /** If possible, try to provide an alias domain,
      * i.e. a domain name that is very likely to link to this one
      * */

View File

@@ -83,6 +83,11 @@ public class QueryParams {
         if (path.endsWith("StoryView.py")) { // folklore.org is neat
             return param.startsWith("project=") || param.startsWith("story=");
         }
+        // www.perseus.tufts.edu:
+        if (param.startsWith("collection=")) return true;
+        if (param.startsWith("doc=")) return true;
         return false;
     }
 }

View File

@@ -10,7 +10,9 @@ import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.time.LocalDateTime;
-import java.util.*;
+import java.util.HashSet;
+import java.util.Optional;
+import java.util.Set;
 import java.util.function.Function;
 /** WorkLog is a journal of work done by a process,
@@ -61,6 +63,12 @@ public class WorkLog implements AutoCloseable, Closeable {
         return new WorkLoadIterable<>(logFile, mapper);
     }
+    public static int countEntries(Path crawlerLog) throws IOException{
+        try (var linesStream = Files.lines(crawlerLog)) {
+            return (int) linesStream.filter(WorkLogEntry::isJobId).count();
+        }
+    }
     // Use synchro over concurrent set to avoid competing writes
     // - correct is better than fast here, it's sketchy enough to use
     // a PrintWriter

View File

@@ -89,7 +89,7 @@ public class DatabaseModule extends AbstractModule {
config.addDataSourceProperty("prepStmtCacheSize", "250"); config.addDataSourceProperty("prepStmtCacheSize", "250");
config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048"); config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
config.setMaximumPoolSize(5); config.setMaximumPoolSize(Integer.getInteger("db.poolSize", 5));
config.setMinimumIdle(2); config.setMinimumIdle(2);
config.setMaxLifetime(Duration.ofMinutes(9).toMillis()); config.setMaxLifetime(Duration.ofMinutes(9).toMillis());

View File

@@ -6,6 +6,7 @@ import nu.marginalia.service.ServiceId;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import java.io.IOException;
 import java.net.InetAddress;
 import java.net.NetworkInterface;
 import java.util.Enumeration;
@@ -115,11 +116,12 @@ public class ServiceConfigurationModule {
     }
 }
-    public static String getLocalNetworkIP() throws Exception {
+    public static String getLocalNetworkIP() throws IOException {
         Enumeration<NetworkInterface> nets = NetworkInterface.getNetworkInterfaces();
         while (nets.hasMoreElements()) {
             NetworkInterface netif = nets.nextElement();
+            logger.info("Considering network interface {}: Up? {}, Loopback? {}", netif.getDisplayName(), netif.isUp(), netif.isLoopback());
             if (!netif.isUp() || netif.isLoopback()) {
                 continue;
             }
@@ -127,6 +129,7 @@ public class ServiceConfigurationModule extends AbstractModule {
             Enumeration<InetAddress> inetAddresses = netif.getInetAddresses();
             while (inetAddresses.hasMoreElements()) {
                 InetAddress addr = inetAddresses.nextElement();
+                logger.info("Considering address {}: SiteLocal? {}, Loopback? {}", addr.getHostAddress(), addr.isSiteLocalAddress(), addr.isLoopbackAddress());
                 if (addr.isSiteLocalAddress() && !addr.isLoopbackAddress()) {
                     return addr.getHostAddress();
                 }

View File

@@ -15,6 +15,7 @@ import org.slf4j.LoggerFactory;
 import org.slf4j.Marker;
 import org.slf4j.MarkerFactory;
+import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.util.List;
@@ -106,9 +107,12 @@ public class JoobyService {
                 config.externalAddress());
         // FIXME: This won't work outside of docker, may need to submit a PR to jooby to allow classpaths here
-        jooby.install(new JteModule(Path.of("/app/resources/jte"), Path.of("/app/classes/jte-precompiled")));
-        jooby.assets("/*", Paths.get("/app/resources/static"));
+        if (Files.exists(Path.of("/app/resources/jte")) || Files.exists(Path.of("/app/classes/jte-precompiled"))) {
+            jooby.install(new JteModule(Path.of("/app/resources/jte"), Path.of("/app/classes/jte-precompiled")));
+        }
+        if (Files.exists(Path.of("/app/resources/static"))) {
+            jooby.assets("/*", Paths.get("/app/resources/static"));
+        }
         var options = new ServerOptions();
         options.setHost(config.bindAddress());
         options.setPort(restEndpoint.port());

View File

@@ -6,25 +6,36 @@ import nu.marginalia.service.module.ServiceConfiguration;
 import org.eclipse.jetty.server.Server;
 import org.eclipse.jetty.servlet.ServletContextHandler;
 import org.eclipse.jetty.servlet.ServletHolder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 import java.net.InetSocketAddress;
 public class MetricsServer {
+    private static final Logger logger = LoggerFactory.getLogger(MetricsServer.class);
     @Inject
-    public MetricsServer(ServiceConfiguration configuration) throws Exception {
+    public MetricsServer(ServiceConfiguration configuration) {
         // If less than zero, we forego setting up a metrics server
         if (configuration.metricsPort() < 0)
             return;
-        Server server = new Server(new InetSocketAddress(configuration.bindAddress(), configuration.metricsPort()));
+        try {
+            Server server = new Server(new InetSocketAddress(configuration.bindAddress(), configuration.metricsPort()));
             ServletContextHandler context = new ServletContextHandler();
             context.setContextPath("/");
             server.setHandler(context);
             context.addServlet(new ServletHolder(new MetricsServlet()), "/metrics");
-        server.start();
+            logger.info("MetricsServer listening on {}:{}", configuration.bindAddress(), configuration.metricsPort());
+            server.start();
+        }
+        catch (Exception|NoSuchMethodError ex) {
+            logger.error("Failed to set up metrics server", ex);
+        }
     }
 }

View File

@@ -35,21 +35,8 @@ public class RateLimiter {
     }
-    public static RateLimiter forExpensiveRequest() {
-        return new RateLimiter(5, 10);
-    }
     public static RateLimiter custom(int perMinute) {
-        return new RateLimiter(perMinute, 60);
-    }
-    public static RateLimiter forSpamBots() {
-        return new RateLimiter(120, 3600);
-    }
-    public static RateLimiter forLogin() {
-        return new RateLimiter(3, 15);
+        return new RateLimiter(4 * perMinute, perMinute);
     }
     private void cleanIdleBuckets() {
@@ -62,7 +49,7 @@
     }
     private Bucket createBucket() {
-        var refill = Refill.greedy(1, Duration.ofSeconds(refillRate));
+        var refill = Refill.greedy(refillRate, Duration.ofSeconds(60));
         var bw = Bandwidth.classic(capacity, refill);
         return Bucket.builder().addLimit(bw).build();
     }

View File

@@ -5,6 +5,7 @@
         <Filters>
             <MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" />
             <MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" />
+            <MarkerFilter marker="CRAWLER" onMatch="DENY" onMismatch="NEUTRAL" />
         </Filters>
     </Console>
     <RollingFile name="LogToFile" fileName="${env:WMSA_LOG_DIR:-/var/log/wmsa}/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}.log" filePattern="/var/log/wmsa/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}-log-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz"
@@ -13,9 +14,20 @@
         <Filters>
             <MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" />
             <MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" />
+            <MarkerFilter marker="CRAWLER" onMatch="DENY" onMismatch="NEUTRAL" />
         </Filters>
         <SizeBasedTriggeringPolicy size="10MB" />
     </RollingFile>
+    <RollingFile name="LogToFile" fileName="${env:WMSA_LOG_DIR:-/var/log/wmsa}/crawler-audit-${env:WMSA_SERVICE_NODE:-0}.log" filePattern="/var/log/wmsa/crawler-audit-${env:WMSA_SERVICE_NODE:-0}-log-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz"
+                 ignoreExceptions="false">
+        <PatternLayout>
+            <Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS}: %msg{nolookups}%n</Pattern>
+        </PatternLayout>
+        <SizeBasedTriggeringPolicy size="100MB" />
+        <Filters>
+            <MarkerFilter marker="CRAWLER" onMatch="ALLOW" onMismatch="DENY" />
+        </Filters>
+    </RollingFile>
 </Appenders>
 <Loggers>
     <Logger name="org.apache.zookeeper" level="WARN" />

View File

@@ -5,6 +5,7 @@
         <Filters>
             <MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" />
             <MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" />
+            <MarkerFilter marker="CRAWLER" onMatch="DENY" onMismatch="NEUTRAL" />
         </Filters>
     </Console>
     <RollingFile name="LogToFile" fileName="${env:WMSA_LOG_DIR:-/var/log/wmsa}/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}.log" filePattern="/var/log/wmsa/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}-log-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz"
@@ -17,6 +18,17 @@
             <MarkerFilter marker="PROCESS" onMatch="DENY" onMismatch="NEUTRAL" />
             <MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" />
             <MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" />
+            <MarkerFilter marker="CRAWLER" onMatch="DENY" onMismatch="NEUTRAL" />
+        </Filters>
+    </RollingFile>
+    <RollingFile name="LogToFile" fileName="${env:WMSA_LOG_DIR:-/var/log/wmsa}/crawler-audit-${env:WMSA_SERVICE_NODE:-0}.log" filePattern="/var/log/wmsa/crawler-audit-${env:WMSA_SERVICE_NODE:-0}-log-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz"
+                 ignoreExceptions="false">
+        <PatternLayout>
+            <Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS}: %msg{nolookups}%n</Pattern>
+        </PatternLayout>
+        <SizeBasedTriggeringPolicy size="100MB" />
+        <Filters>
+            <MarkerFilter marker="CRAWLER" onMatch="ALLOW" onMismatch="DENY" />
         </Filters>
     </RollingFile>
 </Appenders>


@@ -25,7 +25,7 @@ import static org.mockito.Mockito.when;
class ZkServiceRegistryTest { class ZkServiceRegistryTest {
private static final int ZOOKEEPER_PORT = 2181; private static final int ZOOKEEPER_PORT = 2181;
private static final GenericContainer<?> zookeeper = private static final GenericContainer<?> zookeeper =
new GenericContainer<>("zookeeper:3.8.0") new GenericContainer<>("zookeeper:3.8")
.withExposedPorts(ZOOKEEPER_PORT); .withExposedPorts(ZOOKEEPER_PORT);
List<ZkServiceRegistry> registries = new ArrayList<>(); List<ZkServiceRegistry> registries = new ArrayList<>();


@@ -20,6 +20,7 @@ public enum ExecutorActor {
EXPORT_FEEDS(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED), EXPORT_FEEDS(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
EXPORT_SAMPLE_DATA(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED), EXPORT_SAMPLE_DATA(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
DOWNLOAD_SAMPLE(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED), DOWNLOAD_SAMPLE(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
MIGRATE_CRAWL_DATA(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
PROC_CONVERTER_SPAWNER(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED, NodeProfile.SIDELOAD), PROC_CONVERTER_SPAWNER(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED, NodeProfile.SIDELOAD),
PROC_LOADER_SPAWNER(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED, NodeProfile.SIDELOAD), PROC_LOADER_SPAWNER(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED, NodeProfile.SIDELOAD),


@@ -66,6 +66,7 @@ public class ExecutorActorControlService {
DownloadSampleActor downloadSampleActor, DownloadSampleActor downloadSampleActor,
ScrapeFeedsActor scrapeFeedsActor, ScrapeFeedsActor scrapeFeedsActor,
ExecutorActorStateMachines stateMachines, ExecutorActorStateMachines stateMachines,
MigrateCrawlDataActor migrateCrawlDataActor,
ExportAllPrecessionActor exportAllPrecessionActor, ExportAllPrecessionActor exportAllPrecessionActor,
UpdateRssActor updateRssActor) throws SQLException { UpdateRssActor updateRssActor) throws SQLException {
this.messageQueueFactory = messageQueueFactory; this.messageQueueFactory = messageQueueFactory;
@@ -107,6 +108,8 @@ public class ExecutorActorControlService {
register(ExecutorActor.SCRAPE_FEEDS, scrapeFeedsActor); register(ExecutorActor.SCRAPE_FEEDS, scrapeFeedsActor);
register(ExecutorActor.UPDATE_RSS, updateRssActor); register(ExecutorActor.UPDATE_RSS, updateRssActor);
register(ExecutorActor.MIGRATE_CRAWL_DATA, migrateCrawlDataActor);
if (serviceConfiguration.node() == 1) { if (serviceConfiguration.node() == 1) {
register(ExecutorActor.PREC_EXPORT_ALL, exportAllPrecessionActor); register(ExecutorActor.PREC_EXPORT_ALL, exportAllPrecessionActor);
} }


@@ -14,6 +14,8 @@ import nu.marginalia.mq.persistence.MqPersistence;
import nu.marginalia.nodecfg.NodeConfigurationService; import nu.marginalia.nodecfg.NodeConfigurationService;
import nu.marginalia.nodecfg.model.NodeProfile; import nu.marginalia.nodecfg.model.NodeProfile;
import nu.marginalia.service.module.ServiceConfiguration; import nu.marginalia.service.module.ServiceConfiguration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.time.Duration; import java.time.Duration;
import java.time.LocalDateTime; import java.time.LocalDateTime;
@@ -29,6 +31,7 @@ public class UpdateRssActor extends RecordActorPrototype {
private final NodeConfigurationService nodeConfigurationService; private final NodeConfigurationService nodeConfigurationService;
private final MqPersistence persistence; private final MqPersistence persistence;
private static final Logger logger = LoggerFactory.getLogger(UpdateRssActor.class);
@Inject @Inject
public UpdateRssActor(Gson gson, public UpdateRssActor(Gson gson,
@@ -101,8 +104,8 @@ public class UpdateRssActor extends RecordActorPrototype {
case UpdateRefresh(int count, long msgId) -> { case UpdateRefresh(int count, long msgId) -> {
MqMessage msg = persistence.waitForMessageTerminalState(msgId, Duration.ofSeconds(10), Duration.ofHours(12)); MqMessage msg = persistence.waitForMessageTerminalState(msgId, Duration.ofSeconds(10), Duration.ofHours(12));
if (msg == null) { if (msg == null) {
// Retry the update logger.warn("UpdateRefresh is taking a very long time");
yield new Error("Failed to update feeds: message not found"); yield new UpdateRefresh(count, msgId);
} else if (msg.state() != MqMessageState.OK) { } else if (msg.state() != MqMessageState.OK) {
// Retry the update // Retry the update
yield new Error("Failed to update feeds: " + msg.state()); yield new Error("Failed to update feeds: " + msg.state());
@@ -119,8 +122,8 @@ public class UpdateRssActor extends RecordActorPrototype {
case UpdateClean(long msgId) -> { case UpdateClean(long msgId) -> {
MqMessage msg = persistence.waitForMessageTerminalState(msgId, Duration.ofSeconds(10), Duration.ofHours(12)); MqMessage msg = persistence.waitForMessageTerminalState(msgId, Duration.ofSeconds(10), Duration.ofHours(12));
if (msg == null) { if (msg == null) {
// Retry the update logger.warn("UpdateClean is taking a very long time");
yield new Error("Failed to update feeds: message not found"); yield new UpdateClean(msgId);
} else if (msg.state() != MqMessageState.OK) { } else if (msg.state() != MqMessageState.OK) {
// Retry the update // Retry the update
yield new Error("Failed to update feeds: " + msg.state()); yield new Error("Failed to update feeds: " + msg.state());


@@ -0,0 +1,150 @@
package nu.marginalia.actor.task;
import com.google.gson.Gson;
import jakarta.inject.Inject;
import jakarta.inject.Singleton;
import nu.marginalia.actor.prototype.RecordActorPrototype;
import nu.marginalia.actor.state.ActorStep;
import nu.marginalia.io.CrawlerOutputFile;
import nu.marginalia.process.log.WorkLog;
import nu.marginalia.process.log.WorkLogEntry;
import nu.marginalia.service.control.ServiceHeartbeat;
import nu.marginalia.slop.SlopCrawlDataRecord;
import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorage;
import nu.marginalia.storage.model.FileStorageId;
import org.apache.logging.log4j.util.Strings;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;
@Singleton
public class MigrateCrawlDataActor extends RecordActorPrototype {
private final FileStorageService fileStorageService;
private final ServiceHeartbeat serviceHeartbeat;
private static final Logger logger = LoggerFactory.getLogger(MigrateCrawlDataActor.class);
@Inject
public MigrateCrawlDataActor(Gson gson, FileStorageService fileStorageService, ServiceHeartbeat serviceHeartbeat) {
super(gson);
this.fileStorageService = fileStorageService;
this.serviceHeartbeat = serviceHeartbeat;
}
public record Run(long fileStorageId) implements ActorStep {}
@Override
public ActorStep transition(ActorStep self) throws Exception {
return switch (self) {
case Run(long fileStorageId) -> {
FileStorage storage = fileStorageService.getStorage(FileStorageId.of(fileStorageId));
Path root = storage.asPath();
Path crawlerLog = root.resolve("crawler.log");
Path newCrawlerLog = Files.createTempFile(root, "crawler", ".migrate.log");
int totalEntries = WorkLog.countEntries(crawlerLog);
try (WorkLog workLog = new WorkLog(newCrawlerLog);
var heartbeat = serviceHeartbeat.createServiceAdHocTaskHeartbeat("Migrating")
) {
int entryIdx = 0;
for (Map.Entry<WorkLogEntry, Path> item : WorkLog.iterableMap(crawlerLog, new CrawlDataLocator(root))) {
final WorkLogEntry entry = item.getKey();
final Path inputPath = item.getValue();
Path outputPath = inputPath;
heartbeat.progress("Migrating " + inputPath.getFileName(), entryIdx++, totalEntries);
if (inputPath.toString().endsWith(".parquet")) {
String domain = entry.id();
String id = Integer.toHexString(domain.hashCode());
outputPath = CrawlerOutputFile.createSlopPath(root, id, domain);
if (Files.exists(inputPath)) {
try {
SlopCrawlDataRecord.convertFromParquet(inputPath, outputPath);
Files.deleteIfExists(inputPath);
} catch (Exception ex) {
outputPath = inputPath; // don't update the work log on error
logger.error("Failed to convert " + inputPath, ex);
}
}
else if (!Files.exists(inputPath) && !Files.exists(outputPath)) {
// if the input file is missing, and the output file is missing, we just write the log
// record identical to the old one
outputPath = inputPath;
}
}
// Write a log entry for the (possibly) converted file
workLog.setJobToFinished(entry.id(), outputPath.toString(), entry.cnt());
}
}
Path oldCrawlerLog = Files.createTempFile(root, "crawler-", ".migrate.old.log");
Files.move(crawlerLog, oldCrawlerLog, StandardCopyOption.REPLACE_EXISTING);
Files.move(newCrawlerLog, crawlerLog);
yield new End();
}
default -> new Error();
};
}
private static class CrawlDataLocator implements Function<WorkLogEntry, Optional<Map.Entry<WorkLogEntry, Path>>> {
private final Path crawlRootDir;
CrawlDataLocator(Path crawlRootDir) {
this.crawlRootDir = crawlRootDir;
}
@Override
public Optional<Map.Entry<WorkLogEntry, Path>> apply(WorkLogEntry entry) {
var path = getCrawledFilePath(crawlRootDir, entry.path());
if (!Files.exists(path)) {
return Optional.empty();
}
try {
return Optional.of(Map.entry(entry, path));
}
catch (Exception ex) {
return Optional.empty();
}
}
private Path getCrawledFilePath(Path crawlDir, String fileName) {
int sp = fileName.lastIndexOf('/');
// Normalize the filename
if (sp >= 0 && sp + 1 < fileName.length())
fileName = fileName.substring(sp + 1);
if (fileName.length() < 4)
fileName = Strings.repeat("0", 4 - fileName.length()) + fileName;
String sp1 = fileName.substring(0, 2);
String sp2 = fileName.substring(2, 4);
return crawlDir.resolve(sp1).resolve(sp2).resolve(fileName);
}
}
@Override
public String describe() {
return "Migrates crawl data to the latest format";
}
}


@@ -0,0 +1,47 @@
plugins {
id 'java'
id "com.google.protobuf" version "0.9.4"
id 'jvm-test-suite'
}
java {
toolchain {
languageVersion.set(JavaLanguageVersion.of(rootProject.ext.jvmVersion))
}
}
jar.archiveBaseName = 'favicon-api'
apply from: "$rootProject.projectDir/protobuf.gradle"
apply from: "$rootProject.projectDir/srcsets.gradle"
dependencies {
implementation project(':code:common:model')
implementation project(':code:common:config')
implementation project(':code:common:service')
implementation libs.bundles.slf4j
implementation libs.prometheus
implementation libs.notnull
implementation libs.guava
implementation dependencies.create(libs.guice.get()) {
exclude group: 'com.google.guava'
}
implementation libs.gson
implementation libs.bundles.protobuf
implementation libs.guava
libs.bundles.grpc.get().each {
implementation dependencies.create(it) {
exclude group: 'com.google.guava'
}
}
testImplementation libs.bundles.slf4j.test
testImplementation libs.bundles.junit
testImplementation libs.mockito
}


@@ -0,0 +1,39 @@
package nu.marginalia.api.favicon;
import com.google.inject.Inject;
import nu.marginalia.service.client.GrpcChannelPoolFactory;
import nu.marginalia.service.client.GrpcMultiNodeChannelPool;
import nu.marginalia.service.discovery.property.ServiceKey;
import nu.marginalia.service.discovery.property.ServicePartition;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Optional;
public class FaviconClient {
private static final Logger logger = LoggerFactory.getLogger(FaviconClient.class);
private final GrpcMultiNodeChannelPool<FaviconAPIGrpc.FaviconAPIBlockingStub> channelPool;
@Inject
public FaviconClient(GrpcChannelPoolFactory factory) {
this.channelPool = factory.createMulti(
ServiceKey.forGrpcApi(FaviconAPIGrpc.class, ServicePartition.multi()),
FaviconAPIGrpc::newBlockingStub);
}
public record FaviconData(byte[] bytes, String contentType) {}
public Optional<FaviconData> getFavicon(String domain, int node) {
RpcFaviconResponse rsp = channelPool.call(FaviconAPIGrpc.FaviconAPIBlockingStub::getFavicon)
.forNode(node)
.run(RpcFaviconRequest.newBuilder().setDomain(domain).build());
if (rsp.getData().isEmpty())
return Optional.empty();
return Optional.of(new FaviconData(rsp.getData().toByteArray(), rsp.getContentType()));
}
}
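
A short usage sketch for the new client; obtaining the instance through a Guice injector and the domain/node values are illustrative assumptions, not code from the repository:

// Assumes a Guice injector with the favicon API module installed
FaviconClient faviconClient = injector.getInstance(FaviconClient.class);

faviconClient.getFavicon("www.marginalia.nu", 1).ifPresentOrElse(
        icon -> System.out.println(icon.bytes().length + " bytes of " + icon.contentType()),
        () -> System.out.println("no favicon stored for this domain"));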


@@ -0,0 +1,20 @@
syntax="proto3";
package marginalia.api.favicon;
option java_package="nu.marginalia.api.favicon";
option java_multiple_files=true;
service FaviconAPI {
/** Fetches information about a domain. */
rpc getFavicon(RpcFaviconRequest) returns (RpcFaviconResponse) {}
}
message RpcFaviconRequest {
string domain = 1;
}
message RpcFaviconResponse {
string domain = 1;
bytes data = 2;
string contentType = 3;
}


@@ -0,0 +1,49 @@
plugins {
id 'java'
id 'application'
id 'jvm-test-suite'
}
java {
toolchain {
languageVersion.set(JavaLanguageVersion.of(rootProject.ext.jvmVersion))
}
}
apply from: "$rootProject.projectDir/srcsets.gradle"
dependencies {
implementation project(':code:common:config')
implementation project(':code:common:service')
implementation project(':code:common:model')
implementation project(':code:common:db')
implementation project(':code:functions:favicon:api')
implementation project(':code:processes:crawling-process')
implementation libs.bundles.slf4j
implementation libs.prometheus
implementation libs.guava
libs.bundles.grpc.get().each {
implementation dependencies.create(it) {
exclude group: 'com.google.guava'
}
}
implementation libs.notnull
implementation libs.guava
implementation dependencies.create(libs.guice.get()) {
exclude group: 'com.google.guava'
}
implementation dependencies.create(libs.spark.get()) {
exclude group: 'org.eclipse.jetty'
}
testImplementation libs.bundles.slf4j.test
testImplementation libs.bundles.junit
testImplementation libs.mockito
}


@@ -0,0 +1,48 @@
package nu.marginalia.functions.favicon;
import com.google.inject.Inject;
import com.google.inject.Singleton;
import com.google.protobuf.ByteString;
import io.grpc.stub.StreamObserver;
import nu.marginalia.api.favicon.FaviconAPIGrpc;
import nu.marginalia.api.favicon.RpcFaviconRequest;
import nu.marginalia.api.favicon.RpcFaviconResponse;
import nu.marginalia.crawl.DomainStateDb;
import nu.marginalia.service.server.DiscoverableService;
import java.util.Optional;
@Singleton
public class FaviconGrpcService extends FaviconAPIGrpc.FaviconAPIImplBase implements DiscoverableService {
private final DomainStateDb domainStateDb;
@Inject
public FaviconGrpcService(DomainStateDb domainStateDb) {
this.domainStateDb = domainStateDb;
}
public boolean shouldRegisterService() {
return domainStateDb.isAvailable();
}
@Override
public void getFavicon(RpcFaviconRequest request, StreamObserver<RpcFaviconResponse> responseObserver) {
Optional<DomainStateDb.FaviconRecord> icon = domainStateDb.getIcon(request.getDomain());
RpcFaviconResponse response;
if (icon.isEmpty()) {
response = RpcFaviconResponse.newBuilder().build();
}
else {
var iconRecord = icon.get();
response = RpcFaviconResponse.newBuilder()
.setContentType(iconRecord.contentType())
.setDomain(request.getDomain())
.setData(ByteString.copyFrom(iconRecord.imageData()))
.build();
}
responseObserver.onNext(response);
responseObserver.onCompleted();
}
}


@@ -29,10 +29,12 @@ dependencies {
implementation libs.jsoup implementation libs.jsoup
implementation project(':third-party:rssreader') implementation project(':third-party:rssreader')
implementation libs.opencsv implementation libs.opencsv
implementation libs.slop
implementation libs.sqlite implementation libs.sqlite
implementation libs.bundles.slf4j implementation libs.bundles.slf4j
implementation libs.commons.lang3 implementation libs.commons.lang3
implementation libs.commons.io implementation libs.commons.io
implementation libs.wiremock
implementation libs.prometheus implementation libs.prometheus
implementation libs.guava implementation libs.guava


@@ -1,6 +1,7 @@
package nu.marginalia.livecapture; package nu.marginalia.livecapture;
import com.google.gson.Gson; import com.google.gson.Gson;
import nu.marginalia.WmsaHome;
import nu.marginalia.model.gson.GsonFactory; import nu.marginalia.model.gson.GsonFactory;
import org.slf4j.Logger; import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
@@ -12,10 +13,13 @@ import java.net.http.HttpRequest;
import java.net.http.HttpResponse; import java.net.http.HttpResponse;
import java.time.Duration; import java.time.Duration;
import java.util.Map; import java.util.Map;
import java.util.Optional;
/** Client for local browserless.io API */ /** Client for local browserless.io API */
public class BrowserlessClient implements AutoCloseable { public class BrowserlessClient implements AutoCloseable {
private static final Logger logger = LoggerFactory.getLogger(BrowserlessClient.class); private static final Logger logger = LoggerFactory.getLogger(BrowserlessClient.class);
private static final String BROWSERLESS_TOKEN = System.getProperty("live-capture.browserless-token", "BROWSERLESS_TOKEN");
private final HttpClient httpClient = HttpClient.newBuilder() private final HttpClient httpClient = HttpClient.newBuilder()
.version(HttpClient.Version.HTTP_1_1) .version(HttpClient.Version.HTTP_1_1)
@@ -25,18 +29,21 @@ public class BrowserlessClient implements AutoCloseable {
private final URI browserlessURI; private final URI browserlessURI;
private final Gson gson = GsonFactory.get(); private final Gson gson = GsonFactory.get();
private final String userAgent = WmsaHome.getUserAgent().uaString();
public BrowserlessClient(URI browserlessURI) { public BrowserlessClient(URI browserlessURI) {
this.browserlessURI = browserlessURI; this.browserlessURI = browserlessURI;
} }
public String content(String url, GotoOptions gotoOptions) throws IOException, InterruptedException { public Optional<String> content(String url, GotoOptions gotoOptions) throws IOException, InterruptedException {
Map<String, Object> requestData = Map.of( Map<String, Object> requestData = Map.of(
"url", url, "url", url,
"userAgent", userAgent,
"gotoOptions", gotoOptions "gotoOptions", gotoOptions
); );
var request = HttpRequest.newBuilder() var request = HttpRequest.newBuilder()
.uri(browserlessURI.resolve("/content")) .uri(browserlessURI.resolve("/content?token="+BROWSERLESS_TOKEN))
.method("POST", HttpRequest.BodyPublishers.ofString( .method("POST", HttpRequest.BodyPublishers.ofString(
gson.toJson(requestData) gson.toJson(requestData)
)) ))
@@ -47,10 +54,10 @@ public class BrowserlessClient implements AutoCloseable {
if (rsp.statusCode() >= 300) { if (rsp.statusCode() >= 300) {
logger.info("Failed to fetch content for {}, status {}", url, rsp.statusCode()); logger.info("Failed to fetch content for {}, status {}", url, rsp.statusCode());
return null; return Optional.empty();
} }
return rsp.body(); return Optional.of(rsp.body());
} }
public byte[] screenshot(String url, GotoOptions gotoOptions, ScreenshotOptions screenshotOptions) public byte[] screenshot(String url, GotoOptions gotoOptions, ScreenshotOptions screenshotOptions)
@@ -58,12 +65,13 @@ public class BrowserlessClient implements AutoCloseable {
Map<String, Object> requestData = Map.of( Map<String, Object> requestData = Map.of(
"url", url, "url", url,
"userAgent", userAgent,
"options", screenshotOptions, "options", screenshotOptions,
"gotoOptions", gotoOptions "gotoOptions", gotoOptions
); );
var request = HttpRequest.newBuilder() var request = HttpRequest.newBuilder()
.uri(browserlessURI.resolve("/screenshot")) .uri(browserlessURI.resolve("/screenshot?token="+BROWSERLESS_TOKEN))
.method("POST", HttpRequest.BodyPublishers.ofString( .method("POST", HttpRequest.BodyPublishers.ofString(
gson.toJson(requestData) gson.toJson(requestData)
)) ))
@@ -82,7 +90,7 @@ public class BrowserlessClient implements AutoCloseable {
} }
@Override @Override
public void close() throws Exception { public void close() {
httpClient.shutdownNow(); httpClient.shutdownNow();
} }


@@ -1,6 +1,6 @@
package nu.marginalia.rss.model; package nu.marginalia.rss.model;
import com.apptasticsoftware.rssreader.Item; import nu.marginalia.rss.svc.SimpleFeedParser;
import org.apache.commons.lang3.StringUtils; import org.apache.commons.lang3.StringUtils;
import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.NotNull;
import org.jsoup.Jsoup; import org.jsoup.Jsoup;
@@ -18,37 +18,33 @@ public record FeedItem(String title,
public static final int MAX_DESC_LENGTH = 255; public static final int MAX_DESC_LENGTH = 255;
public static final DateTimeFormatter DATE_FORMAT = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZ"); public static final DateTimeFormatter DATE_FORMAT = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
public static FeedItem fromItem(Item item, boolean keepFragment) { public static FeedItem fromItem(SimpleFeedParser.ItemData item, boolean keepFragment) {
String title = item.getTitle().orElse(""); String title = item.title();
String date = getItemDate(item); String date = getItemDate(item);
String description = getItemDescription(item); String description = getItemDescription(item);
String url; String url;
if (keepFragment || item.getLink().isEmpty()) { if (keepFragment) {
url = item.getLink().orElse(""); url = item.url();
} }
else { else {
try { try {
String link = item.getLink().get(); String link = item.url();
var linkUri = new URI(link); var linkUri = new URI(link);
var cleanUri = new URI(linkUri.getScheme(), linkUri.getAuthority(), linkUri.getPath(), linkUri.getQuery(), null); var cleanUri = new URI(linkUri.getScheme(), linkUri.getAuthority(), linkUri.getPath(), linkUri.getQuery(), null);
url = cleanUri.toString(); url = cleanUri.toString();
} }
catch (Exception e) { catch (Exception e) {
// fallback to original link if we can't clean it, this is not a very important step // fallback to original link if we can't clean it, this is not a very important step
url = item.getLink().get(); url = item.url();
} }
} }
return new FeedItem(title, date, description, url); return new FeedItem(title, date, description, url);
} }
private static String getItemDescription(Item item) { private static String getItemDescription(SimpleFeedParser.ItemData item) {
Optional<String> description = item.getDescription(); String rawDescription = item.description();
if (description.isEmpty())
return "";
String rawDescription = description.get();
if (rawDescription.indexOf('<') >= 0) { if (rawDescription.indexOf('<') >= 0) {
rawDescription = Jsoup.parseBodyFragment(rawDescription).text(); rawDescription = Jsoup.parseBodyFragment(rawDescription).text();
} }
@@ -58,15 +54,18 @@ public record FeedItem(String title,
// e.g. http://fabiensanglard.net/rss.xml does dates like this: 1 Apr 2021 00:00:00 +0000 // e.g. http://fabiensanglard.net/rss.xml does dates like this: 1 Apr 2021 00:00:00 +0000
private static final DateTimeFormatter extraFormatter = DateTimeFormatter.ofPattern("d MMM yyyy HH:mm:ss Z"); private static final DateTimeFormatter extraFormatter = DateTimeFormatter.ofPattern("d MMM yyyy HH:mm:ss Z");
private static String getItemDate(Item item) { private static String getItemDate(SimpleFeedParser.ItemData item) {
Optional<ZonedDateTime> zonedDateTime = Optional.empty(); Optional<ZonedDateTime> zonedDateTime = Optional.empty();
try { try {
zonedDateTime = item.getPubDateZonedDateTime(); zonedDateTime = item.getPubDateZonedDateTime();
} }
catch (Exception e) { catch (Exception e) {
zonedDateTime = item.getPubDate() try {
.map(extraFormatter::parse) zonedDateTime = Optional.of(ZonedDateTime.from(extraFormatter.parse(item.pubDate())));
.map(ZonedDateTime::from); }
catch (Exception e2) {
// ignore
}
} }
return zonedDateTime.map(date -> date.format(DATE_FORMAT)).orElse(""); return zonedDateTime.map(date -> date.format(DATE_FORMAT)).orElse("");


@@ -1,7 +1,5 @@
package nu.marginalia.rss.svc; package nu.marginalia.rss.svc;
import com.apptasticsoftware.rssreader.Item;
import com.apptasticsoftware.rssreader.RssReader;
import com.google.inject.Inject; import com.google.inject.Inject;
import com.opencsv.CSVReader; import com.opencsv.CSVReader;
import nu.marginalia.WmsaHome; import nu.marginalia.WmsaHome;
@@ -20,7 +18,6 @@ import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorage; import nu.marginalia.storage.model.FileStorage;
import nu.marginalia.storage.model.FileStorageType; import nu.marginalia.storage.model.FileStorageType;
import nu.marginalia.util.SimpleBlockingThreadPool; import nu.marginalia.util.SimpleBlockingThreadPool;
import org.apache.commons.io.input.BOMInputStream;
import org.slf4j.Logger; import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
@@ -32,11 +29,11 @@ import java.net.URISyntaxException;
import java.net.http.HttpClient; import java.net.http.HttpClient;
import java.net.http.HttpRequest; import java.net.http.HttpRequest;
import java.net.http.HttpResponse; import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.sql.SQLException; import java.sql.SQLException;
import java.time.*; import java.time.*;
import java.time.format.DateTimeFormatter; import java.time.format.DateTimeFormatter;
import java.util.*; import java.util.*;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors; import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicInteger;
@@ -48,8 +45,6 @@ public class FeedFetcherService {
private static final int MAX_FEED_ITEMS = 10; private static final int MAX_FEED_ITEMS = 10;
private static final Logger logger = LoggerFactory.getLogger(FeedFetcherService.class); private static final Logger logger = LoggerFactory.getLogger(FeedFetcherService.class);
private final RssReader rssReader = new RssReader();
private final FeedDb feedDb; private final FeedDb feedDb;
private final FileStorageService fileStorageService; private final FileStorageService fileStorageService;
private final NodeConfigurationService nodeConfigurationService; private final NodeConfigurationService nodeConfigurationService;
@@ -72,23 +67,12 @@ public class FeedFetcherService {
this.nodeConfigurationService = nodeConfigurationService; this.nodeConfigurationService = nodeConfigurationService;
this.serviceHeartbeat = serviceHeartbeat; this.serviceHeartbeat = serviceHeartbeat;
this.executorClient = executorClient; this.executorClient = executorClient;
// Add support for some alternate date tags for atom
rssReader.addItemExtension("issued", this::setDateFallback);
rssReader.addItemExtension("created", this::setDateFallback);
}
private void setDateFallback(Item item, String value) {
if (item.getPubDate().isEmpty()) {
item.setPubDate(value);
}
} }
public enum UpdateMode { public enum UpdateMode {
CLEAN, CLEAN,
REFRESH REFRESH
}; }
public void updateFeeds(UpdateMode updateMode) throws IOException { public void updateFeeds(UpdateMode updateMode) throws IOException {
if (updating) // Prevent concurrent updates if (updating) // Prevent concurrent updates
@@ -96,6 +80,7 @@ public class FeedFetcherService {
throw new IllegalStateException("Already updating feeds, refusing to start another update"); throw new IllegalStateException("Already updating feeds, refusing to start another update");
} }
try (FeedDbWriter writer = feedDb.createWriter(); try (FeedDbWriter writer = feedDb.createWriter();
HttpClient client = HttpClient.newBuilder() HttpClient client = HttpClient.newBuilder()
.connectTimeout(Duration.ofSeconds(15)) .connectTimeout(Duration.ofSeconds(15))
@@ -103,6 +88,8 @@ public class FeedFetcherService {
.followRedirects(HttpClient.Redirect.NORMAL) .followRedirects(HttpClient.Redirect.NORMAL)
.version(HttpClient.Version.HTTP_2) .version(HttpClient.Version.HTTP_2)
.build(); .build();
ExecutorService fetchExecutor = Executors.newCachedThreadPool();
FeedJournal feedJournal = FeedJournal.create();
var heartbeat = serviceHeartbeat.createServiceAdHocTaskHeartbeat("Update Rss Feeds") var heartbeat = serviceHeartbeat.createServiceAdHocTaskHeartbeat("Update Rss Feeds")
) { ) {
updating = true; updating = true;
@@ -146,7 +133,7 @@ public class FeedFetcherService {
FetchResult feedData; FetchResult feedData;
try (DomainLocks.DomainLock domainLock = domainLocks.lockDomain(new EdgeDomain(feed.domain()))) { try (DomainLocks.DomainLock domainLock = domainLocks.lockDomain(new EdgeDomain(feed.domain()))) {
feedData = fetchFeedData(feed, client, ifModifiedSinceDate, ifNoneMatchTag); feedData = fetchFeedData(feed, client, fetchExecutor, ifModifiedSinceDate, ifNoneMatchTag);
} catch (Exception ex) { } catch (Exception ex) {
feedData = new FetchResult.TransientError(); feedData = new FetchResult.TransientError();
} }
@@ -155,6 +142,8 @@ public class FeedFetcherService {
case FetchResult.Success(String value, String etag) -> { case FetchResult.Success(String value, String etag) -> {
writer.saveEtag(feed.domain(), etag); writer.saveEtag(feed.domain(), etag);
writer.saveFeed(parseFeed(value, feed)); writer.saveFeed(parseFeed(value, feed));
feedJournal.record(feed.feedUrl(), value);
} }
case FetchResult.NotModified() -> { case FetchResult.NotModified() -> {
writer.saveEtag(feed.domain(), ifNoneMatchTag); writer.saveEtag(feed.domain(), ifNoneMatchTag);
@@ -224,6 +213,7 @@ public class FeedFetcherService {
private FetchResult fetchFeedData(FeedDefinition feed, private FetchResult fetchFeedData(FeedDefinition feed,
HttpClient client, HttpClient client,
ExecutorService executorService,
@Nullable String ifModifiedSinceDate, @Nullable String ifModifiedSinceDate,
@Nullable String ifNoneMatchTag) @Nullable String ifNoneMatchTag)
{ {
@@ -250,7 +240,14 @@ public class FeedFetcherService {
HttpRequest getRequest = requestBuilder.build(); HttpRequest getRequest = requestBuilder.build();
for (int i = 0; i < 3; i++) { for (int i = 0; i < 3; i++) {
HttpResponse<byte[]> rs = client.send(getRequest, HttpResponse.BodyHandlers.ofByteArray());
/* Note we need to use an executor to time-limit the send() method in HttpClient, as
* its support for timeouts only applies to the time until response starts to be received,
* and does not catch the case when the server starts to send data but then hangs.
*/
HttpResponse<byte[]> rs = executorService.submit(
() -> client.send(getRequest, HttpResponse.BodyHandlers.ofByteArray()))
.get(15, TimeUnit.SECONDS);
if (rs.statusCode() == 429) { // Too Many Requests if (rs.statusCode() == 429) { // Too Many Requests
int retryAfter = Integer.parseInt(rs.headers().firstValue("Retry-After").orElse("2")); int retryAfter = Integer.parseInt(rs.headers().firstValue("Retry-After").orElse("2"));
@@ -367,12 +364,7 @@ public class FeedFetcherService {
public FeedItems parseFeed(String feedData, FeedDefinition definition) { public FeedItems parseFeed(String feedData, FeedDefinition definition) {
try { try {
feedData = sanitizeEntities(feedData); List<SimpleFeedParser.ItemData> rawItems = SimpleFeedParser.parse(feedData);
List<Item> rawItems = rssReader.read(
// Massage the data to maximize the possibility of the flaky XML parser consuming it
new BOMInputStream(new ByteArrayInputStream(feedData.trim().getBytes(StandardCharsets.UTF_8)), false)
).toList();
boolean keepUriFragment = rawItems.size() < 2 || areFragmentsDisparate(rawItems); boolean keepUriFragment = rawItems.size() < 2 || areFragmentsDisparate(rawItems);
@@ -395,33 +387,6 @@ public class FeedFetcherService {
} }
} }
private static final Map<String, String> HTML_ENTITIES = Map.of(
"&raquo;", "»",
"&laquo;", "«",
"&mdash;", "--",
"&ndash;", "-",
"&rsquo;", "'",
"&lsquo;", "'",
"&quot;", "\"",
"&nbsp;", ""
);
/** The XML parser will blow up if you insert HTML entities in the feed XML,
* which is unfortunately relatively common. Replace them as far as is possible
* with their corresponding characters
*/
static String sanitizeEntities(String feedData) {
String result = feedData;
for (Map.Entry<String, String> entry : HTML_ENTITIES.entrySet()) {
result = result.replace(entry.getKey(), entry.getValue());
}
// Handle lone ampersands not part of a recognized XML entity
result = result.replaceAll("&(?!(amp|lt|gt|apos|quot);)", "&amp;");
return result;
}
/** Decide whether to keep URI fragments in the feed items. /** Decide whether to keep URI fragments in the feed items.
* <p></p> * <p></p>
* We keep fragments if there are multiple different fragments in the items. * We keep fragments if there are multiple different fragments in the items.
@@ -429,16 +394,16 @@ public class FeedFetcherService {
* @param items The items to check * @param items The items to check
* @return True if we should keep the fragments, false otherwise * @return True if we should keep the fragments, false otherwise
*/ */
private boolean areFragmentsDisparate(List<Item> items) { private boolean areFragmentsDisparate(List<SimpleFeedParser.ItemData> items) {
Set<String> seenFragments = new HashSet<>(); Set<String> seenFragments = new HashSet<>();
try { try {
for (var item : items) { for (var item : items) {
if (item.getLink().isEmpty()) { if (item.url().isBlank()) {
continue; continue;
} }
var link = item.getLink().get(); var link = item.url();
if (!link.contains("#")) { if (!link.contains("#")) {
continue; continue;
} }


@@ -0,0 +1,76 @@
package nu.marginalia.rss.svc;
import nu.marginalia.WmsaHome;
import nu.marginalia.slop.SlopTable;
import nu.marginalia.slop.column.string.StringColumn;
import nu.marginalia.slop.desc.StorageType;
import org.apache.commons.io.FileUtils;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.function.BiConsumer;
/** Utility for recording fetched feeds to a journal, useful in debugging feed parser issues.
*/
public interface FeedJournal extends AutoCloseable {
StringColumn urlColumn = new StringColumn("url");
StringColumn contentsColumn = new StringColumn("contents", StandardCharsets.UTF_8, StorageType.ZSTD);
void record(String url, String contents) throws IOException;
void close() throws IOException;
static FeedJournal create() throws IOException {
if (Boolean.getBoolean("feedFetcher.persistJournal")) {
Path journalPath = WmsaHome.getDataPath().resolve("feed-journal");
if (Files.isDirectory(journalPath)) {
FileUtils.deleteDirectory(journalPath.toFile());
}
Files.createDirectories(journalPath);
return new RecordingFeedJournal(journalPath);
}
else {
return new NoOpFeedJournal();
}
}
class NoOpFeedJournal implements FeedJournal {
@Override
public void record(String url, String contents) {}
@Override
public void close() {}
}
class RecordingFeedJournal extends SlopTable implements FeedJournal {
private final StringColumn.Writer urlWriter;
private final StringColumn.Writer contentsWriter;
public RecordingFeedJournal(Path path) throws IOException {
super(path, SlopTable.getNumPages(path, FeedJournal.urlColumn));
urlWriter = urlColumn.create(this);
contentsWriter = contentsColumn.create(this);
}
public synchronized void record(String url, String contents) throws IOException {
urlWriter.put(url);
contentsWriter.put(contents);
}
}
static void replay(Path journalPath, BiConsumer<String, String> urlAndContent) throws IOException {
try (SlopTable table = new SlopTable(journalPath)) {
final StringColumn.Reader urlReader = urlColumn.open(table);
final StringColumn.Reader contentsReader = contentsColumn.open(table);
while (urlReader.hasRemaining()) {
urlAndContent.accept(urlReader.get(), contentsReader.get());
}
}
}
}
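
The journal is only recorded when the feedFetcher.persistJournal system property is set; a sketch of turning it on and later replaying the captured feeds (the print statement is illustrative):

import nu.marginalia.WmsaHome;
import nu.marginalia.rss.svc.FeedJournal;

import java.nio.file.Path;

class FeedJournalReplaySketch {
    public static void main(String[] args) throws Exception {
        // Set before the fetcher runs so RecordingFeedJournal is used instead of the no-op
        System.setProperty("feedFetcher.persistJournal", "true");

        // After a fetch pass, replay the recorded feeds to reproduce parser issues offline
        Path journalPath = WmsaHome.getDataPath().resolve("feed-journal");
        FeedJournal.replay(journalPath, (url, contents) ->
                System.out.println(url + " -> " + contents.length() + " chars"));
    }
}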


@@ -0,0 +1,94 @@
package nu.marginalia.rss.svc;
import com.apptasticsoftware.rssreader.DateTimeParser;
import com.apptasticsoftware.rssreader.util.Default;
import org.jsoup.Jsoup;
import org.jsoup.parser.Parser;
import java.time.ZonedDateTime;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
public class SimpleFeedParser {
private static final DateTimeParser dateTimeParser = Default.getDateTimeParser();
public record ItemData (
String title,
String description,
String url,
String pubDate
) {
public boolean isWellFormed() {
return title != null && !title.isBlank() &&
description != null && !description.isBlank() &&
url != null && !url.isBlank() &&
pubDate != null && !pubDate.isBlank();
}
public Optional<ZonedDateTime> getPubDateZonedDateTime() {
try {
return Optional.ofNullable(dateTimeParser.parse(pubDate()));
}
catch (Exception e) {
return Optional.empty();
}
}
}
public static List<ItemData> parse(String content) {
var doc = Jsoup.parse(content, Parser.xmlParser());
List<ItemData> ret = new ArrayList<>();
doc.select("item, entry").forEach(element -> {
String link = "";
String title = "";
String description = "";
String pubDate = "";
for (String attr : List.of("title", "dc:title")) {
if (!title.isBlank())
break;
var tag = element.getElementsByTag(attr).first();
if (tag != null) {
title = tag.text();
}
}
for (String attr : List.of("title", "summary", "content", "description", "dc:description")) {
if (!description.isBlank())
break;
var tag = element.getElementsByTag(attr).first();
if (tag != null) {
description = tag.text();
}
}
for (String attr : List.of("pubDate", "published", "updated", "issued", "created", "dc:date")) {
if (!pubDate.isBlank())
break;
var tag = element.getElementsByTag(attr).first();
if (tag != null) {
pubDate = tag.text();
}
}
for (String attr : List.of("link", "url")) {
if (!link.isBlank())
break;
var tag = element.getElementsByTag(attr).first();
if (tag != null) {
link = tag.text();
}
}
ret.add(new ItemData(title, description, link, pubDate));
});
return ret;
}
}
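
A quick sketch of what the parser returns for a minimal RSS fragment; the XML below is made up for illustration:

import nu.marginalia.rss.svc.SimpleFeedParser;

import java.util.List;

class SimpleFeedParserSketch {
    public static void main(String[] args) {
        String rss = """
                <rss><channel><item>
                  <title>Hello world</title>
                  <link>https://example.com/post</link>
                  <description>An example item</description>
                  <pubDate>1 Apr 2021 00:00:00 +0000</pubDate>
                </item></channel></rss>
                """;

        List<SimpleFeedParser.ItemData> items = SimpleFeedParser.parse(rss);
        for (var item : items) {
            // getPubDateZonedDateTime() yields Optional.empty() if the date cannot be parsed
            System.out.println(item.title() + " | " + item.url() + " | " + item.getPubDateZonedDateTime());
        }
    }
}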


@@ -1,36 +1,97 @@
package nu.marginalia.livecapture; package nu.marginalia.livecapture;
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;
import nu.marginalia.WmsaHome;
import nu.marginalia.service.module.ServiceConfigurationModule;
import org.junit.jupiter.api.Assertions; import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeAll; import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer; import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Testcontainers; import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName; import org.testcontainers.utility.DockerImageName;
import java.io.IOException;
import java.net.URI; import java.net.URI;
import java.util.Map;
import static com.github.tomakehurst.wiremock.client.WireMock.*;
@Testcontainers @Testcontainers
@Tag("slow")
public class BrowserlessClientTest { public class BrowserlessClientTest {
static GenericContainer<?> container = new GenericContainer<>(DockerImageName.parse("browserless/chrome")).withExposedPorts(3000); static GenericContainer<?> container = new GenericContainer<>(DockerImageName.parse("browserless/chrome"))
.withEnv(Map.of("TOKEN", "BROWSERLESS_TOKEN"))
.withNetworkMode("bridge")
.withExposedPorts(3000);
static WireMockServer wireMockServer =
new WireMockServer(WireMockConfiguration.wireMockConfig()
.port(18089));
static String localIp;
static URI browserlessURI;
@BeforeAll @BeforeAll
public static void setup() { public static void setup() throws IOException {
container.start(); container.start();
browserlessURI = URI.create(String.format("http://%s:%d/",
container.getHost(),
container.getMappedPort(3000))
);
wireMockServer.start();
wireMockServer.stubFor(get("/").willReturn(aResponse().withStatus(200).withBody("Ok")));
localIp = ServiceConfigurationModule.getLocalNetworkIP();
}
@Tag("flaky")
@Test
public void testInspectContentUA__Flaky() throws Exception {
try (var client = new BrowserlessClient(browserlessURI)) {
client.content("http://" + localIp + ":18089/",
BrowserlessClient.GotoOptions.defaultValues()
);
}
wireMockServer.verify(getRequestedFor(urlEqualTo("/")).withHeader("User-Agent", equalTo(WmsaHome.getUserAgent().uaString())));
}
@Tag("flaky")
@Test
public void testInspectScreenshotUA__Flaky() throws Exception {
try (var client = new BrowserlessClient(browserlessURI)) {
client.screenshot("http://" + localIp + ":18089/",
BrowserlessClient.GotoOptions.defaultValues(),
BrowserlessClient.ScreenshotOptions.defaultValues()
);
}
wireMockServer.verify(getRequestedFor(urlEqualTo("/")).withHeader("User-Agent", equalTo(WmsaHome.getUserAgent().uaString())));
} }
@Test @Test
public void testContent() throws Exception { public void testContent() throws Exception {
try (var client = new BrowserlessClient(URI.create("http://" + container.getHost() + ":" + container.getMappedPort(3000)))) { try (var client = new BrowserlessClient(browserlessURI)) {
var content = client.content("https://www.marginalia.nu/", BrowserlessClient.GotoOptions.defaultValues()); var content = client.content("https://www.marginalia.nu/", BrowserlessClient.GotoOptions.defaultValues()).orElseThrow();
Assertions.assertNotNull(content, "Content should not be null");
Assertions.assertFalse(content.isBlank(), "Content should not be empty"); Assertions.assertFalse(content.isBlank(), "Content should not be empty");
} }
} }
@Test @Test
public void testScreenshot() throws Exception { public void testScreenshot() throws Exception {
try (var client = new BrowserlessClient(URI.create("http://" + container.getHost() + ":" + container.getMappedPort(3000)))) { try (var client = new BrowserlessClient(browserlessURI)) {
var screenshot = client.screenshot("https://www.marginalia.nu/", BrowserlessClient.GotoOptions.defaultValues(), BrowserlessClient.ScreenshotOptions.defaultValues()); var screenshot = client.screenshot("https://www.marginalia.nu/",
BrowserlessClient.GotoOptions.defaultValues(),
BrowserlessClient.ScreenshotOptions.defaultValues());
Assertions.assertNotNull(screenshot, "Screenshot should not be null"); Assertions.assertNotNull(screenshot, "Screenshot should not be null");
} }
} }


@@ -1,50 +0,0 @@
package nu.marginalia.rss.svc;
import com.apptasticsoftware.rssreader.Item;
import com.apptasticsoftware.rssreader.RssReader;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import java.util.List;
import java.util.Optional;
public class TestXmlSanitization {
@Test
public void testPreservedEntities() {
Assertions.assertEquals("&amp;", FeedFetcherService.sanitizeEntities("&amp;"));
Assertions.assertEquals("&lt;", FeedFetcherService.sanitizeEntities("&lt;"));
Assertions.assertEquals("&gt;", FeedFetcherService.sanitizeEntities("&gt;"));
Assertions.assertEquals("&apos;", FeedFetcherService.sanitizeEntities("&apos;"));
}
@Test
public void testNlnetTitleTag() {
// The NLnet atom feed puts HTML tags in the entry/title tags, which breaks the vanilla RssReader code
// Verify we're able to consume and strip out the HTML tags
RssReader r = new RssReader();
List<Item> items = r.read(ClassLoader.getSystemResourceAsStream("nlnet.atom")).toList();
Assertions.assertEquals(1, items.size());
for (var item : items) {
Assertions.assertEquals(Optional.of("50 Free and Open Source Projects Selected for NGI Zero grants"), item.getTitle());
}
}
@Test
public void testStrayAmpersand() {
Assertions.assertEquals("Bed &amp; Breakfast", FeedFetcherService.sanitizeEntities("Bed & Breakfast"));
}
@Test
public void testTranslatedHtmlEntity() {
Assertions.assertEquals("Foo -- Bar", FeedFetcherService.sanitizeEntities("Foo &mdash; Bar"));
}
@Test
public void testTranslatedHtmlEntityQuot() {
Assertions.assertEquals("\"Bob\"", FeedFetcherService.sanitizeEntities("&quot;Bob&quot;"));
}
}


@@ -2,9 +2,6 @@ package nu.marginalia.api.searchquery;
import nu.marginalia.api.searchquery.model.query.SearchPhraseConstraint; import nu.marginalia.api.searchquery.model.query.SearchPhraseConstraint;
import nu.marginalia.api.searchquery.model.query.SearchQuery; import nu.marginalia.api.searchquery.model.query.SearchQuery;
import nu.marginalia.api.searchquery.model.results.Bm25Parameters;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.index.query.limit.QueryLimits;
import nu.marginalia.index.query.limit.SpecificationLimit; import nu.marginalia.index.query.limit.SpecificationLimit;
import nu.marginalia.index.query.limit.SpecificationLimitType; import nu.marginalia.index.query.limit.SpecificationLimitType;
@@ -27,37 +24,19 @@ public class IndexProtobufCodec {
.build(); .build();
} }
public static QueryLimits convertQueryLimits(RpcQueryLimits queryLimits) {
return new QueryLimits(
queryLimits.getResultsByDomain(),
queryLimits.getResultsTotal(),
queryLimits.getTimeoutMs(),
queryLimits.getFetchSize()
);
}
public static RpcQueryLimits convertQueryLimits(QueryLimits queryLimits) {
return RpcQueryLimits.newBuilder()
.setResultsByDomain(queryLimits.resultsByDomain())
.setResultsTotal(queryLimits.resultsTotal())
.setTimeoutMs(queryLimits.timeoutMs())
.setFetchSize(queryLimits.fetchSize())
.build();
}
public static SearchQuery convertRpcQuery(RpcQuery query) { public static SearchQuery convertRpcQuery(RpcQuery query) {
List<SearchPhraseConstraint> phraeConstraints = new ArrayList<>(); List<SearchPhraseConstraint> phraseConstraints = new ArrayList<>();
for (int j = 0; j < query.getPhrasesCount(); j++) { for (int j = 0; j < query.getPhrasesCount(); j++) {
var coh = query.getPhrases(j); var coh = query.getPhrases(j);
if (coh.getType() == RpcPhrases.TYPE.OPTIONAL) { if (coh.getType() == RpcPhrases.TYPE.OPTIONAL) {
phraeConstraints.add(new SearchPhraseConstraint.Optional(List.copyOf(coh.getTermsList()))); phraseConstraints.add(new SearchPhraseConstraint.Optional(List.copyOf(coh.getTermsList())));
} }
else if (coh.getType() == RpcPhrases.TYPE.MANDATORY) { else if (coh.getType() == RpcPhrases.TYPE.MANDATORY) {
phraeConstraints.add(new SearchPhraseConstraint.Mandatory(List.copyOf(coh.getTermsList()))); phraseConstraints.add(new SearchPhraseConstraint.Mandatory(List.copyOf(coh.getTermsList())));
} }
else if (coh.getType() == RpcPhrases.TYPE.FULL) { else if (coh.getType() == RpcPhrases.TYPE.FULL) {
phraeConstraints.add(new SearchPhraseConstraint.Full(List.copyOf(coh.getTermsList()))); phraseConstraints.add(new SearchPhraseConstraint.Full(List.copyOf(coh.getTermsList())));
} }
else { else {
throw new IllegalArgumentException("Unknown phrase constraint type: " + coh.getType()); throw new IllegalArgumentException("Unknown phrase constraint type: " + coh.getType());
@@ -70,7 +49,7 @@ public class IndexProtobufCodec {
query.getExcludeList(), query.getExcludeList(),
query.getAdviceList(), query.getAdviceList(),
query.getPriorityList(), query.getPriorityList(),
phraeConstraints phraseConstraints
); );
} }
@@ -103,60 +82,4 @@ public class IndexProtobufCodec {
return subqueryBuilder.build(); return subqueryBuilder.build();
} }
public static ResultRankingParameters convertRankingParameterss(RpcResultRankingParameters params) {
if (params == null)
return ResultRankingParameters.sensibleDefaults();
return new ResultRankingParameters(
new Bm25Parameters(params.getBm25K(), params.getBm25B()),
params.getShortDocumentThreshold(),
params.getShortDocumentPenalty(),
params.getDomainRankBonus(),
params.getQualityPenalty(),
params.getShortSentenceThreshold(),
params.getShortSentencePenalty(),
params.getBm25Weight(),
params.getTcfFirstPositionWeight(),
params.getTcfVerbatimWeight(),
params.getTcfProximityWeight(),
ResultRankingParameters.TemporalBias.valueOf(params.getTemporalBias().getBias().name()),
params.getTemporalBiasWeight(),
params.getExportDebugData()
);
}
public static RpcResultRankingParameters convertRankingParameterss(ResultRankingParameters rankingParams,
RpcTemporalBias temporalBias)
{
if (rankingParams == null) {
rankingParams = ResultRankingParameters.sensibleDefaults();
}
var builder = RpcResultRankingParameters.newBuilder()
.setBm25B(rankingParams.bm25Params.b())
.setBm25K(rankingParams.bm25Params.k())
.setShortDocumentThreshold(rankingParams.shortDocumentThreshold)
.setShortDocumentPenalty(rankingParams.shortDocumentPenalty)
.setDomainRankBonus(rankingParams.domainRankBonus)
.setQualityPenalty(rankingParams.qualityPenalty)
.setShortSentenceThreshold(rankingParams.shortSentenceThreshold)
.setShortSentencePenalty(rankingParams.shortSentencePenalty)
.setBm25Weight(rankingParams.bm25Weight)
.setTcfFirstPositionWeight(rankingParams.tcfFirstPosition)
.setTcfProximityWeight(rankingParams.tcfProximity)
.setTcfVerbatimWeight(rankingParams.tcfVerbatim)
.setTemporalBiasWeight(rankingParams.temporalBiasWeight)
.setExportDebugData(rankingParams.exportDebugData);
if (temporalBias != null && temporalBias.getBias() != RpcTemporalBias.Bias.NONE) {
builder.setTemporalBias(temporalBias);
}
else {
builder.setTemporalBias(RpcTemporalBias.newBuilder()
.setBias(RpcTemporalBias.Bias.valueOf(rankingParams.temporalBias.name())));
}
return builder.build();
}
} }


@@ -5,7 +5,7 @@ import nu.marginalia.api.searchquery.model.query.QueryParams;
import nu.marginalia.api.searchquery.model.query.QueryResponse; import nu.marginalia.api.searchquery.model.query.QueryResponse;
import nu.marginalia.api.searchquery.model.query.SearchSpecification; import nu.marginalia.api.searchquery.model.query.SearchSpecification;
import nu.marginalia.api.searchquery.model.results.DecoratedSearchResultItem; import nu.marginalia.api.searchquery.model.results.DecoratedSearchResultItem;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters; import nu.marginalia.api.searchquery.model.results.PrototypeRankingParameters;
import nu.marginalia.api.searchquery.model.results.SearchResultItem; import nu.marginalia.api.searchquery.model.results.SearchResultItem;
import nu.marginalia.api.searchquery.model.results.SearchResultKeywordScore; import nu.marginalia.api.searchquery.model.results.SearchResultKeywordScore;
import nu.marginalia.api.searchquery.model.results.debug.DebugFactor; import nu.marginalia.api.searchquery.model.results.debug.DebugFactor;
@@ -37,7 +37,7 @@ public class QueryProtobufCodec {
builder.setSize(IndexProtobufCodec.convertSpecLimit(query.specs.size)); builder.setSize(IndexProtobufCodec.convertSpecLimit(query.specs.size));
builder.setRank(IndexProtobufCodec.convertSpecLimit(query.specs.rank)); builder.setRank(IndexProtobufCodec.convertSpecLimit(query.specs.rank));
builder.setQueryLimits(IndexProtobufCodec.convertQueryLimits(query.specs.queryLimits)); builder.setQueryLimits(query.specs.queryLimits);
// Query strategy may be overridden by the query, but if not, use the one from the request // Query strategy may be overridden by the query, but if not, use the one from the request
if (query.specs.queryStrategy != null && query.specs.queryStrategy != QueryStrategy.AUTO) if (query.specs.queryStrategy != null && query.specs.queryStrategy != QueryStrategy.AUTO)
@@ -45,9 +45,27 @@ public class QueryProtobufCodec {
else else
builder.setQueryStrategy(request.getQueryStrategy()); builder.setQueryStrategy(request.getQueryStrategy());
if (query.specs.rankingParams != null) { if (request.getTemporalBias().getBias() != RpcTemporalBias.Bias.NONE) {
builder.setParameters(IndexProtobufCodec.convertRankingParameterss(query.specs.rankingParams, request.getTemporalBias())); if (query.specs.rankingParams != null) {
builder.setParameters(
RpcResultRankingParameters.newBuilder(query.specs.rankingParams)
.setTemporalBias(request.getTemporalBias())
.build()
);
} else {
builder.setParameters(
RpcResultRankingParameters.newBuilder(PrototypeRankingParameters.sensibleDefaults())
.setTemporalBias(request.getTemporalBias())
.build()
);
}
} else if (query.specs.rankingParams != null) {
builder.setParameters(query.specs.rankingParams);
} }
// else {
// if we have no ranking params, we don't need to set them, the client check and use the default values
// so we don't need to send this huge object over the wire
// }
return builder.build(); return builder.build();
} }
@@ -65,18 +83,13 @@ public class QueryProtobufCodec {
builder.setSize(IndexProtobufCodec.convertSpecLimit(query.specs.size)); builder.setSize(IndexProtobufCodec.convertSpecLimit(query.specs.size));
builder.setRank(IndexProtobufCodec.convertSpecLimit(query.specs.rank)); builder.setRank(IndexProtobufCodec.convertSpecLimit(query.specs.rank));
builder.setQueryLimits(IndexProtobufCodec.convertQueryLimits(query.specs.queryLimits)); builder.setQueryLimits(query.specs.queryLimits);
// Query strategy may be overridden by the query, but if not, use the one from the request // Query strategy may be overridden by the query, but if not, use the one from the request
builder.setQueryStrategy(query.specs.queryStrategy.name()); builder.setQueryStrategy(query.specs.queryStrategy.name());
if (query.specs.rankingParams != null) { if (query.specs.rankingParams != null) {
builder.setParameters(IndexProtobufCodec.convertRankingParameterss( builder.setParameters(query.specs.rankingParams);
query.specs.rankingParams,
RpcTemporalBias.newBuilder().setBias(
RpcTemporalBias.Bias.NONE)
.build())
);
} }
return builder.build(); return builder.build();
@@ -95,10 +108,10 @@ public class QueryProtobufCodec {
IndexProtobufCodec.convertSpecLimit(request.getSize()), IndexProtobufCodec.convertSpecLimit(request.getSize()),
IndexProtobufCodec.convertSpecLimit(request.getRank()), IndexProtobufCodec.convertSpecLimit(request.getRank()),
request.getDomainIdsList(), request.getDomainIdsList(),
IndexProtobufCodec.convertQueryLimits(request.getQueryLimits()), request.getQueryLimits(),
request.getSearchSetIdentifier(), request.getSearchSetIdentifier(),
QueryStrategy.valueOf(request.getQueryStrategy()), QueryStrategy.valueOf(request.getQueryStrategy()),
ResultRankingParameters.TemporalBias.valueOf(request.getTemporalBias().getBias().name()), RpcTemporalBias.Bias.valueOf(request.getTemporalBias().getBias().name()),
request.getPagination().getPage() request.getPagination().getPage()
); );
} }
@@ -294,9 +307,9 @@ public class QueryProtobufCodec {
IndexProtobufCodec.convertSpecLimit(specs.getYear()), IndexProtobufCodec.convertSpecLimit(specs.getYear()),
IndexProtobufCodec.convertSpecLimit(specs.getSize()), IndexProtobufCodec.convertSpecLimit(specs.getSize()),
IndexProtobufCodec.convertSpecLimit(specs.getRank()), IndexProtobufCodec.convertSpecLimit(specs.getRank()),
IndexProtobufCodec.convertQueryLimits(specs.getQueryLimits()), specs.getQueryLimits(),
QueryStrategy.valueOf(specs.getQueryStrategy()), QueryStrategy.valueOf(specs.getQueryStrategy()),
IndexProtobufCodec.convertRankingParameterss(specs.getParameters()) specs.hasParameters() ? specs.getParameters() : null
); );
} }
@@ -307,7 +320,7 @@ public class QueryProtobufCodec {
.addAllTacitExcludes(params.tacitExcludes()) .addAllTacitExcludes(params.tacitExcludes())
.addAllTacitPriority(params.tacitPriority()) .addAllTacitPriority(params.tacitPriority())
.setHumanQuery(params.humanQuery()) .setHumanQuery(params.humanQuery())
.setQueryLimits(IndexProtobufCodec.convertQueryLimits(params.limits())) .setQueryLimits(params.limits())
.setQuality(IndexProtobufCodec.convertSpecLimit(params.quality())) .setQuality(IndexProtobufCodec.convertSpecLimit(params.quality()))
.setYear(IndexProtobufCodec.convertSpecLimit(params.year())) .setYear(IndexProtobufCodec.convertSpecLimit(params.year()))
.setSize(IndexProtobufCodec.convertSpecLimit(params.size())) .setSize(IndexProtobufCodec.convertSpecLimit(params.size()))
@@ -319,7 +332,7 @@ public class QueryProtobufCodec {
.build()) .build())
.setPagination(RpcQsQueryPagination.newBuilder() .setPagination(RpcQsQueryPagination.newBuilder()
.setPage(params.page()) .setPage(params.page())
.setPageSize(Math.min(100, params.limits().resultsTotal())) .setPageSize(Math.min(100, params.limits().getResultsTotal()))
.build()); .build());
if (params.nearDomain() != null) if (params.nearDomain() != null)


@@ -1,7 +1,7 @@
package nu.marginalia.api.searchquery.model.query;
-import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
-import nu.marginalia.index.query.limit.QueryLimits;
+import nu.marginalia.api.searchquery.RpcQueryLimits;
+import nu.marginalia.api.searchquery.RpcTemporalBias;
import nu.marginalia.index.query.limit.QueryStrategy;
import nu.marginalia.index.query.limit.SpecificationLimit;
@@ -21,14 +21,14 @@ public record QueryParams(
        SpecificationLimit size,
        SpecificationLimit rank,
        List<Integer> domainIds,
-       QueryLimits limits,
+       RpcQueryLimits limits,
        String identifier,
        QueryStrategy queryStrategy,
-       ResultRankingParameters.TemporalBias temporalBias,
+       RpcTemporalBias.Bias temporalBias,
        int page
)
{
-   public QueryParams(String query, QueryLimits limits, String identifier) {
+   public QueryParams(String query, RpcQueryLimits limits, String identifier) {
        this(query, null,
            List.of(),
            List.of(),
@@ -42,7 +42,7 @@ public record QueryParams(
            limits,
            identifier,
            QueryStrategy.AUTO,
-           ResultRankingParameters.TemporalBias.NONE,
+           RpcTemporalBias.Bias.NONE,
            1 // page
        );
    }

View File

@@ -1,10 +1,11 @@
package nu.marginalia.api.searchquery.model.query;
-import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
-import nu.marginalia.index.query.limit.QueryLimits;
+import nu.marginalia.api.searchquery.RpcQueryLimits;
+import nu.marginalia.api.searchquery.RpcResultRankingParameters;
import nu.marginalia.index.query.limit.QueryStrategy;
import nu.marginalia.index.query.limit.SpecificationLimit;
+import javax.annotation.Nullable;
import java.util.List;
public class SearchSpecification {
@@ -24,11 +25,12 @@ public class SearchSpecification {
    public SpecificationLimit size;
    public SpecificationLimit rank;
-   public final QueryLimits queryLimits;
+   public final RpcQueryLimits queryLimits;
    public final QueryStrategy queryStrategy;
-   public final ResultRankingParameters rankingParams;
+   @Nullable
+   public final RpcResultRankingParameters rankingParams;
    public SearchSpecification(SearchQuery query,
                               List<Integer> domains,
@@ -38,9 +40,9 @@ public class SearchSpecification {
                               SpecificationLimit year,
                               SpecificationLimit size,
                               SpecificationLimit rank,
-                              QueryLimits queryLimits,
+                              RpcQueryLimits queryLimits,
                               QueryStrategy queryStrategy,
-                              ResultRankingParameters rankingParams)
+                              @Nullable RpcResultRankingParameters rankingParams)
    {
        this.query = query;
        this.domains = domains;
@@ -91,7 +93,7 @@ public class SearchSpecification {
        return this.rank;
    }
-   public QueryLimits getQueryLimits() {
+   public RpcQueryLimits getQueryLimits() {
        return this.queryLimits;
    }
@@ -99,7 +101,7 @@ public class SearchSpecification {
        return this.queryStrategy;
    }
-   public ResultRankingParameters getRankingParams() {
+   public RpcResultRankingParameters getRankingParams() {
        return this.rankingParams;
    }
@@ -120,9 +122,9 @@ public class SearchSpecification {
        private boolean size$set;
        private SpecificationLimit rank$value;
        private boolean rank$set;
-       private QueryLimits queryLimits;
+       private RpcQueryLimits queryLimits;
        private QueryStrategy queryStrategy;
-       private ResultRankingParameters rankingParams;
+       private RpcResultRankingParameters rankingParams;
        SearchSpecificationBuilder() {
        }
@@ -171,7 +173,7 @@ public class SearchSpecification {
            return this;
        }
-       public SearchSpecificationBuilder queryLimits(QueryLimits queryLimits) {
+       public SearchSpecificationBuilder queryLimits(RpcQueryLimits queryLimits) {
            this.queryLimits = queryLimits;
            return this;
        }
@@ -181,7 +183,7 @@ public class SearchSpecification {
            return this;
        }
-       public SearchSpecificationBuilder rankingParams(ResultRankingParameters rankingParams) {
+       public SearchSpecificationBuilder rankingParams(RpcResultRankingParameters rankingParams) {
            this.rankingParams = rankingParams;
            return this;
        }

View File

@@ -0,0 +1,33 @@
package nu.marginalia.api.searchquery.model.results;
import nu.marginalia.api.searchquery.RpcResultRankingParameters;
import nu.marginalia.api.searchquery.RpcTemporalBias;
public class PrototypeRankingParameters {
/** These are the default ranking parameters that are used when no parameters are specified. */
private static final RpcResultRankingParameters _sensibleDefaults = RpcResultRankingParameters.newBuilder()
.setBm25B(0.5)
.setBm25K(1.2)
.setShortDocumentThreshold(2000)
.setShortDocumentPenalty(2.)
.setDomainRankBonus(1 / 100.)
.setQualityPenalty(1 / 15.)
.setShortSentenceThreshold(2)
.setShortSentencePenalty(5)
.setBm25Weight(1.)
.setTcfVerbatimWeight(1.)
.setTcfProximityWeight(1.)
.setTcfFirstPositionWeight(5)
.setTemporalBias(RpcTemporalBias.newBuilder().setBias(RpcTemporalBias.Bias.NONE))
.setTemporalBiasWeight(5.0)
.setExportDebugData(false)
.setDisablePenalties(false)
.build();
public static RpcResultRankingParameters sensibleDefaults() {
return _sensibleDefaults;
}
}
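
Since RpcResultRankingParameters is a generated protobuf message, callers that want to deviate from these shared defaults can presumably derive a modified copy rather than rebuilding the whole parameter set. A minimal sketch, not part of this patch, assuming the standard generated toBuilder() API:

    // Sketch only: start from the shared defaults and flip a single knob.
    RpcResultRankingParameters tweaked = PrototypeRankingParameters.sensibleDefaults()
            .toBuilder()
            .setDisablePenalties(true)   // field introduced in this change set
            .build();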

View File

@@ -1,12 +1,13 @@
package nu.marginalia.api.searchquery.model.results;
+import nu.marginalia.api.searchquery.RpcResultRankingParameters;
import nu.marginalia.api.searchquery.model.compiled.CqDataInt;
import java.util.BitSet;
public class ResultRankingContext {
    private final int docCount;
-   public final ResultRankingParameters params;
+   public final RpcResultRankingParameters params;
    public final BitSet regularMask;
@@ -21,7 +22,7 @@ public class ResultRankingContext {
    public final CqDataInt priorityCounts;
    public ResultRankingContext(int docCount,
-                               ResultRankingParameters params,
+                               RpcResultRankingParameters params,
                                BitSet ngramsMask,
                                BitSet regularMask,
                                CqDataInt fullCounts,

View File

@@ -1,278 +0,0 @@
package nu.marginalia.api.searchquery.model.results;
import java.util.Objects;
public class ResultRankingParameters {
/**
* Tuning for BM25 when applied to full document matches
*/
public final Bm25Parameters bm25Params;
/**
* Documents below this length are penalized
*/
public int shortDocumentThreshold;
public double shortDocumentPenalty;
/**
* Scaling factor associated with domain rank (unscaled rank value is 0-255; high is good)
*/
public double domainRankBonus;
/**
* Scaling factor associated with document quality (unscaled rank value is 0-15; high is bad)
*/
public double qualityPenalty;
/**
* Average sentence length values below this threshold are penalized, range [0-4), 2 or 3 is probably what you want
*/
public int shortSentenceThreshold;
/**
* Magnitude of penalty for documents with low average sentence length
*/
public double shortSentencePenalty;
public double bm25Weight;
public double tcfFirstPosition;
public double tcfVerbatim;
public double tcfProximity;
public TemporalBias temporalBias;
public double temporalBiasWeight;
public boolean exportDebugData;
public ResultRankingParameters(Bm25Parameters bm25Params, int shortDocumentThreshold, double shortDocumentPenalty, double domainRankBonus, double qualityPenalty, int shortSentenceThreshold, double shortSentencePenalty, double bm25Weight, double tcfFirstPosition, double tcfVerbatim, double tcfProximity, TemporalBias temporalBias, double temporalBiasWeight, boolean exportDebugData) {
this.bm25Params = bm25Params;
this.shortDocumentThreshold = shortDocumentThreshold;
this.shortDocumentPenalty = shortDocumentPenalty;
this.domainRankBonus = domainRankBonus;
this.qualityPenalty = qualityPenalty;
this.shortSentenceThreshold = shortSentenceThreshold;
this.shortSentencePenalty = shortSentencePenalty;
this.bm25Weight = bm25Weight;
this.tcfFirstPosition = tcfFirstPosition;
this.tcfVerbatim = tcfVerbatim;
this.tcfProximity = tcfProximity;
this.temporalBias = temporalBias;
this.temporalBiasWeight = temporalBiasWeight;
this.exportDebugData = exportDebugData;
}
public static ResultRankingParameters sensibleDefaults() {
return builder()
.bm25Params(new Bm25Parameters(1.2, 0.5))
.shortDocumentThreshold(2000)
.shortDocumentPenalty(2.)
.domainRankBonus(1 / 100.)
.qualityPenalty(1 / 15.)
.shortSentenceThreshold(2)
.shortSentencePenalty(5)
.bm25Weight(1.)
.tcfVerbatim(1.)
.tcfProximity(1.)
.tcfFirstPosition(5)
.temporalBias(TemporalBias.NONE)
.temporalBiasWeight(5.0)
.exportDebugData(false)
.build();
}
public static ResultRankingParametersBuilder builder() {
return new ResultRankingParametersBuilder();
}
public Bm25Parameters getBm25Params() {
return this.bm25Params;
}
public int getShortDocumentThreshold() {
return this.shortDocumentThreshold;
}
public double getShortDocumentPenalty() {
return this.shortDocumentPenalty;
}
public double getDomainRankBonus() {
return this.domainRankBonus;
}
public double getQualityPenalty() {
return this.qualityPenalty;
}
public int getShortSentenceThreshold() {
return this.shortSentenceThreshold;
}
public double getShortSentencePenalty() {
return this.shortSentencePenalty;
}
public double getBm25Weight() {
return this.bm25Weight;
}
public double getTcfFirstPosition() {
return this.tcfFirstPosition;
}
public double getTcfVerbatim() {
return this.tcfVerbatim;
}
public double getTcfProximity() {
return this.tcfProximity;
}
public TemporalBias getTemporalBias() {
return this.temporalBias;
}
public double getTemporalBiasWeight() {
return this.temporalBiasWeight;
}
public boolean isExportDebugData() {
return this.exportDebugData;
}
@Override
public final boolean equals(Object o) {
if (this == o) return true;
if (!(o instanceof ResultRankingParameters that)) return false;
return shortDocumentThreshold == that.shortDocumentThreshold && Double.compare(shortDocumentPenalty, that.shortDocumentPenalty) == 0 && Double.compare(domainRankBonus, that.domainRankBonus) == 0 && Double.compare(qualityPenalty, that.qualityPenalty) == 0 && shortSentenceThreshold == that.shortSentenceThreshold && Double.compare(shortSentencePenalty, that.shortSentencePenalty) == 0 && Double.compare(bm25Weight, that.bm25Weight) == 0 && Double.compare(tcfFirstPosition, that.tcfFirstPosition) == 0 && Double.compare(tcfVerbatim, that.tcfVerbatim) == 0 && Double.compare(tcfProximity, that.tcfProximity) == 0 && Double.compare(temporalBiasWeight, that.temporalBiasWeight) == 0 && exportDebugData == that.exportDebugData && Objects.equals(bm25Params, that.bm25Params) && temporalBias == that.temporalBias;
}
@Override
public int hashCode() {
int result = Objects.hashCode(bm25Params);
result = 31 * result + shortDocumentThreshold;
result = 31 * result + Double.hashCode(shortDocumentPenalty);
result = 31 * result + Double.hashCode(domainRankBonus);
result = 31 * result + Double.hashCode(qualityPenalty);
result = 31 * result + shortSentenceThreshold;
result = 31 * result + Double.hashCode(shortSentencePenalty);
result = 31 * result + Double.hashCode(bm25Weight);
result = 31 * result + Double.hashCode(tcfFirstPosition);
result = 31 * result + Double.hashCode(tcfVerbatim);
result = 31 * result + Double.hashCode(tcfProximity);
result = 31 * result + Objects.hashCode(temporalBias);
result = 31 * result + Double.hashCode(temporalBiasWeight);
result = 31 * result + Boolean.hashCode(exportDebugData);
return result;
}
public String toString() {
return "ResultRankingParameters(bm25Params=" + this.getBm25Params() + ", shortDocumentThreshold=" + this.getShortDocumentThreshold() + ", shortDocumentPenalty=" + this.getShortDocumentPenalty() + ", domainRankBonus=" + this.getDomainRankBonus() + ", qualityPenalty=" + this.getQualityPenalty() + ", shortSentenceThreshold=" + this.getShortSentenceThreshold() + ", shortSentencePenalty=" + this.getShortSentencePenalty() + ", bm25Weight=" + this.getBm25Weight() + ", tcfFirstPosition=" + this.getTcfFirstPosition() + ", tcfVerbatim=" + this.getTcfVerbatim() + ", tcfProximity=" + this.getTcfProximity() + ", temporalBias=" + this.getTemporalBias() + ", temporalBiasWeight=" + this.getTemporalBiasWeight() + ", exportDebugData=" + this.isExportDebugData() + ")";
}
public enum TemporalBias {
RECENT, OLD, NONE
}
public static class ResultRankingParametersBuilder {
private Bm25Parameters bm25Params;
private int shortDocumentThreshold;
private double shortDocumentPenalty;
private double domainRankBonus;
private double qualityPenalty;
private int shortSentenceThreshold;
private double shortSentencePenalty;
private double bm25Weight;
private double tcfFirstPosition;
private double tcfVerbatim;
private double tcfProximity;
private TemporalBias temporalBias;
private double temporalBiasWeight;
private boolean exportDebugData;
ResultRankingParametersBuilder() {
}
public ResultRankingParametersBuilder bm25Params(Bm25Parameters bm25Params) {
this.bm25Params = bm25Params;
return this;
}
public ResultRankingParametersBuilder shortDocumentThreshold(int shortDocumentThreshold) {
this.shortDocumentThreshold = shortDocumentThreshold;
return this;
}
public ResultRankingParametersBuilder shortDocumentPenalty(double shortDocumentPenalty) {
this.shortDocumentPenalty = shortDocumentPenalty;
return this;
}
public ResultRankingParametersBuilder domainRankBonus(double domainRankBonus) {
this.domainRankBonus = domainRankBonus;
return this;
}
public ResultRankingParametersBuilder qualityPenalty(double qualityPenalty) {
this.qualityPenalty = qualityPenalty;
return this;
}
public ResultRankingParametersBuilder shortSentenceThreshold(int shortSentenceThreshold) {
this.shortSentenceThreshold = shortSentenceThreshold;
return this;
}
public ResultRankingParametersBuilder shortSentencePenalty(double shortSentencePenalty) {
this.shortSentencePenalty = shortSentencePenalty;
return this;
}
public ResultRankingParametersBuilder bm25Weight(double bm25Weight) {
this.bm25Weight = bm25Weight;
return this;
}
public ResultRankingParametersBuilder tcfFirstPosition(double tcfFirstPosition) {
this.tcfFirstPosition = tcfFirstPosition;
return this;
}
public ResultRankingParametersBuilder tcfVerbatim(double tcfVerbatim) {
this.tcfVerbatim = tcfVerbatim;
return this;
}
public ResultRankingParametersBuilder tcfProximity(double tcfProximity) {
this.tcfProximity = tcfProximity;
return this;
}
public ResultRankingParametersBuilder temporalBias(TemporalBias temporalBias) {
this.temporalBias = temporalBias;
return this;
}
public ResultRankingParametersBuilder temporalBiasWeight(double temporalBiasWeight) {
this.temporalBiasWeight = temporalBiasWeight;
return this;
}
public ResultRankingParametersBuilder exportDebugData(boolean exportDebugData) {
this.exportDebugData = exportDebugData;
return this;
}
public ResultRankingParameters build() {
return new ResultRankingParameters(this.bm25Params, this.shortDocumentThreshold, this.shortDocumentPenalty, this.domainRankBonus, this.qualityPenalty, this.shortSentenceThreshold, this.shortSentencePenalty, this.bm25Weight, this.tcfFirstPosition, this.tcfVerbatim, this.tcfProximity, this.temporalBias, this.temporalBiasWeight, this.exportDebugData);
}
public String toString() {
return "ResultRankingParameters.ResultRankingParametersBuilder(bm25Params=" + this.bm25Params + ", shortDocumentThreshold=" + this.shortDocumentThreshold + ", shortDocumentPenalty=" + this.shortDocumentPenalty + ", domainRankBonus=" + this.domainRankBonus + ", qualityPenalty=" + this.qualityPenalty + ", shortSentenceThreshold=" + this.shortSentenceThreshold + ", shortSentencePenalty=" + this.shortSentencePenalty + ", bm25Weight=" + this.bm25Weight + ", tcfFirstPosition=" + this.tcfFirstPosition + ", tcfVerbatim=" + this.tcfVerbatim + ", tcfProximity=" + this.tcfProximity + ", temporalBias=" + this.temporalBias + ", temporalBiasWeight=" + this.temporalBiasWeight + ", exportDebugData=" + this.exportDebugData + ")";
}
}
}

View File

@@ -162,6 +162,7 @@ message RpcResultRankingParameters {
    double temporalBiasWeight = 17;
    bool exportDebugData = 18;
+   bool disablePenalties = 19;
}

View File

@@ -3,8 +3,6 @@ package nu.marginalia.index.client;
import nu.marginalia.api.searchquery.IndexProtobufCodec;
import nu.marginalia.api.searchquery.model.query.SearchPhraseConstraint;
import nu.marginalia.api.searchquery.model.query.SearchQuery;
-import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
-import nu.marginalia.index.query.limit.QueryLimits;
import nu.marginalia.index.query.limit.SpecificationLimit;
import org.junit.jupiter.api.Test;
@@ -22,18 +20,6 @@ class IndexProtobufCodecTest {
        verifyIsIdentityTransformation(SpecificationLimit.lessThan(1), l -> IndexProtobufCodec.convertSpecLimit(IndexProtobufCodec.convertSpecLimit(l)));
    }
-   @Test
-   public void testRankingParameters() {
-       verifyIsIdentityTransformation(ResultRankingParameters.sensibleDefaults(),
-               p -> IndexProtobufCodec.convertRankingParameterss(IndexProtobufCodec.convertRankingParameterss(p, null)));
-   }
-   @Test
-   public void testQueryLimits() {
-       verifyIsIdentityTransformation(new QueryLimits(1,2,3,4),
-               l -> IndexProtobufCodec.convertQueryLimits(IndexProtobufCodec.convertQueryLimits(l))
-       );
-   }
    @Test
    public void testSubqery() {
        verifyIsIdentityTransformation(new SearchQuery(

View File

@@ -2,8 +2,9 @@ package nu.marginalia.functions.searchquery;
import com.google.inject.Inject;
import com.google.inject.Singleton;
+import nu.marginalia.api.searchquery.RpcQueryLimits;
+import nu.marginalia.api.searchquery.RpcResultRankingParameters;
import nu.marginalia.api.searchquery.model.query.*;
-import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.functions.searchquery.query_parser.QueryExpansion;
import nu.marginalia.functions.searchquery.query_parser.QueryParser;
import nu.marginalia.functions.searchquery.query_parser.token.QueryToken;
@@ -36,7 +37,7 @@ public class QueryFactory {
    public ProcessedQuery createQuery(QueryParams params,
-                                     @Nullable ResultRankingParameters rankingParams) {
+                                     @Nullable RpcResultRankingParameters rankingParams) {
        final var query = params.humanQuery();
        if (query.length() > 1000) {
@@ -132,7 +133,9 @@ public class QueryFactory {
        var limits = params.limits();
        // Disable limits on number of results per domain if we're searching with a site:-type term
        if (domain != null) {
-           limits = limits.forSingleDomain();
+           limits = RpcQueryLimits.newBuilder(limits)
+                   .setResultsByDomain(limits.getResultsTotal())
+                   .build();
        }
        var expansion = queryExpansion.expandQuery(queryBuilder.searchTermsInclude);

View File

@@ -9,7 +9,7 @@ import nu.marginalia.api.searchquery.*;
import nu.marginalia.api.searchquery.model.query.ProcessedQuery;
import nu.marginalia.api.searchquery.model.query.QueryParams;
import nu.marginalia.api.searchquery.model.results.DecoratedSearchResultItem;
-import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
+import nu.marginalia.api.searchquery.model.results.PrototypeRankingParameters;
import nu.marginalia.index.api.IndexClient;
import nu.marginalia.service.server.DiscoverableService;
import org.slf4j.Logger;
@@ -55,7 +55,7 @@ public class QueryGRPCService
                .time(() -> {
                    var params = QueryProtobufCodec.convertRequest(request);
-                   var query = queryFactory.createQuery(params, ResultRankingParameters.sensibleDefaults());
+                   var query = queryFactory.createQuery(params, PrototypeRankingParameters.sensibleDefaults());
                    var indexRequest = QueryProtobufCodec.convertQuery(request, query);
@@ -102,7 +102,7 @@ public class QueryGRPCService
            String originalQuery,
            QueryParams params,
            IndexClient.Pagination pagination,
-           ResultRankingParameters rankingParameters) {
+           RpcResultRankingParameters rankingParameters) {
        var query = queryFactory.createQuery(params, rankingParameters);
        IndexClient.AggregateQueryResponse response = indexClient.executeQueries(QueryProtobufCodec.convertQuery(originalQuery, query), pagination);

View File

@@ -134,6 +134,10 @@ public class QueryExpansion {
        if (scoreCombo > scoreA + scoreB || scoreCombo > 1000) {
            graph.addVariantForSpan(prev, qw, joinedWord);
        }
+       else if (StringUtils.isAlpha(prev.word()) && StringUtils.isNumeric(qw.word())) { // join e.g. trs 80 to trs80 and trs-80
+           graph.addVariantForSpan(prev, qw, prev.word() + qw.word());
+           graph.addVariantForSpan(prev, qw, prev.word() + "-" + qw.word());
+       }
    }
    prev = qw;

View File

@@ -1,12 +1,12 @@
package nu.marginalia.query.svc;
import nu.marginalia.WmsaHome;
+import nu.marginalia.api.searchquery.RpcQueryLimits;
+import nu.marginalia.api.searchquery.RpcTemporalBias;
import nu.marginalia.api.searchquery.model.query.QueryParams;
import nu.marginalia.api.searchquery.model.query.SearchSpecification;
-import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.functions.searchquery.QueryFactory;
import nu.marginalia.functions.searchquery.query_parser.QueryExpansion;
-import nu.marginalia.index.query.limit.QueryLimits;
import nu.marginalia.index.query.limit.QueryStrategy;
import nu.marginalia.index.query.limit.SpecificationLimit;
import nu.marginalia.index.query.limit.SpecificationLimitType;
@@ -49,10 +49,15 @@ public class QueryFactoryTest {
                SpecificationLimit.none(),
                SpecificationLimit.none(),
                null,
-               new QueryLimits(100, 100, 100, 100),
+               RpcQueryLimits.newBuilder()
+                       .setResultsTotal(100)
+                       .setResultsByDomain(100)
+                       .setTimeoutMs(100)
+                       .setFetchSize(100)
+                       .build(),
                "NONE",
                QueryStrategy.AUTO,
-               ResultRankingParameters.TemporalBias.NONE,
+               RpcTemporalBias.Bias.NONE,
                0), null).specs;
    }
@@ -208,6 +213,18 @@ public class QueryFactoryTest {
        System.out.println(subquery);
    }
+   @Test
+   public void testContractionWordNum() {
+       var subquery = parseAndGetSpecs("glove 80");
+       Assertions.assertTrue(subquery.query.compiledQuery.contains(" glove "));
+       Assertions.assertTrue(subquery.query.compiledQuery.contains(" 80 "));
+       Assertions.assertTrue(subquery.query.compiledQuery.contains(" glove-80 "));
+       Assertions.assertTrue(subquery.query.compiledQuery.contains(" glove80 "));
+   }
    @Test
    public void testCplusPlus() {
        var subquery = parseAndGetSpecs("std::vector::push_back vector");

View File

@@ -16,20 +16,19 @@ import org.slf4j.LoggerFactory;
import java.util.ArrayList;
import java.util.Comparator;
-import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
+import java.util.concurrent.atomic.AtomicInteger;
-import static java.lang.Math.clamp;
+import java.util.function.Consumer;
@Singleton
public class IndexClient {
    private static final Logger logger = LoggerFactory.getLogger(IndexClient.class);
    private final GrpcMultiNodeChannelPool<IndexApiGrpc.IndexApiBlockingStub> channelPool;
    private final DomainBlacklistImpl blacklist;
-   private static final ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
+   private static final ExecutorService executor = Executors.newCachedThreadPool();
    @Inject
    public IndexClient(GrpcChannelPoolFactory channelPoolFactory, DomainBlacklistImpl blacklist) {
@@ -51,40 +50,37 @@ public class IndexClient {
    /** Execute a query on the index partitions and return the combined results. */
    public AggregateQueryResponse executeQueries(RpcIndexQuery indexRequest, Pagination pagination) {
-       List<CompletableFuture<Iterator<RpcDecoratedResultItem>>> futures =
-               channelPool.call(IndexApiGrpc.IndexApiBlockingStub::query)
-                       .async(executor)
-                       .runEach(indexRequest);
        final int requestedMaxResults = indexRequest.getQueryLimits().getResultsTotal();
-       final int resultsUpperBound = requestedMaxResults * channelPool.getNumNodes();
-       List<RpcDecoratedResultItem> results = new ArrayList<>(resultsUpperBound);
-       for (var future : futures) {
-           try {
-               future.get().forEachRemaining(results::add);
-           }
-           catch (Exception e) {
-               logger.error("Downstream exception", e);
-           }
-       }
-       // Sort the results by ranking score and remove blacklisted domains
-       results.sort(comparator);
-       results.removeIf(this::isBlacklisted);
-       int numReceivedResults = results.size();
-       // pagination is typically 1-indexed, so we need to adjust the start and end indices
-       int indexStart = (pagination.page - 1) * pagination.pageSize;
-       int indexEnd = (pagination.page) * pagination.pageSize;
-       results = results.subList(
-               clamp(indexStart, 0, Math.max(0, results.size() - 1)), // from is inclusive, so subtract 1 from size()
-               clamp(indexEnd, 0, results.size()));
-       return new AggregateQueryResponse(results, pagination.page(), numReceivedResults);
+       AtomicInteger totalNumResults = new AtomicInteger(0);
+       List<RpcDecoratedResultItem> results =
+               channelPool.call(IndexApiGrpc.IndexApiBlockingStub::query)
+                       .async(executor)
+                       .runEach(indexRequest)
+                       .stream()
+                       .map(future -> future.thenApply(iterator -> {
+                           List<RpcDecoratedResultItem> ret = new ArrayList<>(requestedMaxResults);
+                           iterator.forEachRemaining(ret::add);
+                           totalNumResults.addAndGet(ret.size());
+                           return ret;
+                       }))
+                       .mapMulti((CompletableFuture<List<RpcDecoratedResultItem>> fut, Consumer<List<RpcDecoratedResultItem>> c) -> {
+                           try {
+                               c.accept(fut.join());
+                           } catch (Exception e) {
+                               logger.error("Error while fetching results", e);
+                           }
+                       })
+                       .flatMap(List::stream)
+                       .filter(item -> !isBlacklisted(item))
+                       .sorted(comparator)
+                       .skip(Math.max(0, (pagination.page - 1) * pagination.pageSize))
+                       .limit(pagination.pageSize)
+                       .toList();
+       return new AggregateQueryResponse(results, pagination.page(), totalNumResults.get());
    }
    private boolean isBlacklisted(RpcDecoratedResultItem item) {

View File

@@ -10,12 +10,12 @@ import it.unimi.dsi.fastutil.longs.LongArrayList;
import nu.marginalia.api.searchquery.IndexApiGrpc;
import nu.marginalia.api.searchquery.RpcDecoratedResultItem;
import nu.marginalia.api.searchquery.RpcIndexQuery;
+import nu.marginalia.api.searchquery.RpcResultRankingParameters;
import nu.marginalia.api.searchquery.model.compiled.CompiledQuery;
import nu.marginalia.api.searchquery.model.compiled.CompiledQueryLong;
import nu.marginalia.api.searchquery.model.compiled.CqDataInt;
import nu.marginalia.api.searchquery.model.query.SearchSpecification;
import nu.marginalia.api.searchquery.model.results.ResultRankingContext;
-import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.array.page.LongQueryBuffer;
import nu.marginalia.index.index.StatefulIndex;
import nu.marginalia.index.model.SearchParameters;
@@ -211,7 +211,7 @@ public class IndexGrpcService
    /** This class is responsible for ranking the results and adding the best results to the
     * resultHeap, which depending on the state of the indexLookup threads may or may not block
     */
-   private ResultRankingContext createRankingContext(ResultRankingParameters rankingParams,
+   private ResultRankingContext createRankingContext(RpcResultRankingParameters rankingParams,
                                                      CompiledQuery<String> compiledQuery,
                                                      CompiledQueryLong compiledQueryIds)
    {

View File

@@ -2,12 +2,13 @@ package nu.marginalia.index.model;
import nu.marginalia.api.searchquery.IndexProtobufCodec;
import nu.marginalia.api.searchquery.RpcIndexQuery;
+import nu.marginalia.api.searchquery.RpcResultRankingParameters;
import nu.marginalia.api.searchquery.model.compiled.CompiledQuery;
import nu.marginalia.api.searchquery.model.compiled.CompiledQueryLong;
import nu.marginalia.api.searchquery.model.compiled.CompiledQueryParser;
-import nu.marginalia.api.searchquery.model.query.SearchSpecification;
import nu.marginalia.api.searchquery.model.query.SearchQuery;
-import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
+import nu.marginalia.api.searchquery.model.query.SearchSpecification;
+import nu.marginalia.api.searchquery.model.results.PrototypeRankingParameters;
import nu.marginalia.index.query.IndexSearchBudget;
import nu.marginalia.index.query.limit.QueryStrategy;
import nu.marginalia.index.searchset.SearchSet;
@@ -23,7 +24,7 @@ public class SearchParameters {
    public final IndexSearchBudget budget;
    public final SearchQuery query;
    public final QueryParams queryParams;
-   public final ResultRankingParameters rankingParams;
+   public final RpcResultRankingParameters rankingParams;
    public final int limitByDomain;
    public final int limitTotal;
@@ -41,11 +42,11 @@ public class SearchParameters {
    public SearchParameters(SearchSpecification specsSet, SearchSet searchSet) {
        var limits = specsSet.queryLimits;
-       this.fetchSize = limits.fetchSize();
-       this.budget = new IndexSearchBudget(limits.timeoutMs());
+       this.fetchSize = limits.getFetchSize();
+       this.budget = new IndexSearchBudget(limits.getTimeoutMs());
        this.query = specsSet.query;
-       this.limitByDomain = limits.resultsByDomain();
-       this.limitTotal = limits.resultsTotal();
+       this.limitByDomain = limits.getResultsByDomain();
+       this.limitTotal = limits.getResultsTotal();
        queryParams = new QueryParams(
                specsSet.quality,
@@ -62,17 +63,17 @@
    }
    public SearchParameters(RpcIndexQuery request, SearchSet searchSet) {
-       var limits = IndexProtobufCodec.convertQueryLimits(request.getQueryLimits());
-       this.fetchSize = limits.fetchSize();
+       var limits = request.getQueryLimits();
+       this.fetchSize = limits.getFetchSize();
        // The time budget is halved because this is the point when we start to
        // wrap up the search and return the results.
-       this.budget = new IndexSearchBudget(limits.timeoutMs() / 2);
+       this.budget = new IndexSearchBudget(limits.getTimeoutMs() / 2);
        this.query = IndexProtobufCodec.convertRpcQuery(request.getQuery());
-       this.limitByDomain = limits.resultsByDomain();
-       this.limitTotal = limits.resultsTotal();
+       this.limitByDomain = limits.getResultsByDomain();
+       this.limitTotal = limits.getResultsTotal();
        queryParams = new QueryParams(
                convertSpecLimit(request.getQuality()),
@@ -85,7 +86,7 @@
        compiledQuery = CompiledQueryParser.parse(this.query.compiledQuery);
        compiledQueryIds = compiledQuery.mapToLong(SearchTermsUtil::getWordId);
-       rankingParams = IndexProtobufCodec.convertRankingParameterss(request.getParameters());
+       rankingParams = request.hasParameters() ? request.getParameters() : PrototypeRankingParameters.sensibleDefaults();
    }

View File

@@ -2,7 +2,6 @@ package nu.marginalia.index.results;
import nu.marginalia.api.searchquery.model.compiled.CqDataInt;
import nu.marginalia.api.searchquery.model.compiled.CqExpression;
-import nu.marginalia.api.searchquery.model.results.Bm25Parameters;
import nu.marginalia.api.searchquery.model.results.ResultRankingContext;
import java.util.BitSet;
@@ -24,14 +23,14 @@ public class Bm25GraphVisitor implements CqExpression.DoubleVisitor {
    private final BitSet mask;
-   public Bm25GraphVisitor(Bm25Parameters bm25Parameters,
+   public Bm25GraphVisitor(double k1, double b,
                            float[] counts,
                            int length,
                            ResultRankingContext ctx) {
        this.length = length;
-       this.k1 = bm25Parameters.k();
-       this.b = bm25Parameters.b();
+       this.k1 = k1;
+       this.b = b;
        this.docCount = ctx.termFreqDocCount();
        this.counts = counts;

View File

@@ -0,0 +1,119 @@
package nu.marginalia.index.results;
import com.google.inject.Inject;
import com.google.inject.Singleton;
import gnu.trove.map.hash.TIntDoubleHashMap;
import nu.marginalia.WmsaHome;
import nu.marginalia.db.DbDomainQueries;
import nu.marginalia.model.EdgeDomain;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.OptionalInt;
import java.util.concurrent.TimeUnit;
@Singleton
public class DomainRankingOverrides {
private final DbDomainQueries domainQueries;
private volatile TIntDoubleHashMap rankingFactors = new TIntDoubleHashMap(100, 0.75f, -1, 1.);
private static final Logger logger = LoggerFactory.getLogger(DomainRankingOverrides.class);
private final Path overrideFilePath;
@Inject
public DomainRankingOverrides(DbDomainQueries domainQueries) {
this.domainQueries = domainQueries;
overrideFilePath = WmsaHome.getDataPath().resolve("domain-ranking-factors.txt");
Thread.ofPlatform().start(this::updateRunner);
}
// for test access
public DomainRankingOverrides(DbDomainQueries domainQueries, Path overrideFilePath)
{
this.domainQueries = domainQueries;
this.overrideFilePath = overrideFilePath;
}
public double getRankingFactor(int domainId) {
return rankingFactors.get(domainId);
}
private void updateRunner() {
for (;;) {
reloadFile();
try {
TimeUnit.MINUTES.sleep(5);
} catch (InterruptedException ex) {
logger.warn("Thread interrupted", ex);
break;
}
}
}
void reloadFile() {
if (!Files.exists(overrideFilePath)) {
return;
}
try {
List<String> lines = Files.readAllLines(overrideFilePath);
double factor = 1.;
var newRankingFactors = new TIntDoubleHashMap(lines.size(), 0.75f, -1, 1.);
for (var line : lines) {
if (line.isBlank()) continue;
if (line.startsWith("#")) continue;
String[] parts = line.split("\\s+");
if (parts.length != 2) {
logger.warn("Unrecognized format for domain overrides file: {}", line);
continue;
}
try {
switch (parts[0]) {
case "value" -> {
// error handle me
factor = Double.parseDouble(parts[1]);
if (factor < 0) {
logger.error("Negative values are not permitted, found {}", factor);
factor = 1;
}
}
case "domain" -> {
// error handle
OptionalInt domainId = domainQueries.tryGetDomainId(new EdgeDomain(parts[1]));
if (domainId.isPresent()) {
newRankingFactors.put(domainId.getAsInt(), factor);
}
else {
logger.warn("Unrecognized domain id {}", parts[1]);
}
}
default -> {
logger.warn("Unrecognized format {}", line);
}
}
} catch (Exception ex) {
logger.warn("Error in parsing domain overrides file: {} ({})", line, ex.getClass().getSimpleName());
}
}
rankingFactors = newRankingFactors;
} catch (IOException ex) {
logger.error("Failed to read " + overrideFilePath, ex);
}
}
}
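
Going by the parser above, the override file (domain-ranking-factors.txt under the WmsaHome data path) is plain text: blank lines and lines starting with '#' are skipped, a "value" line sets the current multiplier, and each following "domain" line receives that multiplier until a new "value" appears. A hypothetical example (the domain names are made up for illustration):

# boost documentation mirrors by 30%
value 1.3
domain docs.example.com
domain wiki.example.org

# demote a known content farm to half weight
value 0.5
domain spam.example.net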

View File

@@ -40,13 +40,16 @@ public class IndexResultRankingService {
    private final DocumentDbReader documentDbReader;
    private final StatefulIndex statefulIndex;
+   private final DomainRankingOverrides domainRankingOverrides;
    @Inject
    public IndexResultRankingService(DocumentDbReader documentDbReader,
-                                    StatefulIndex statefulIndex)
+                                    StatefulIndex statefulIndex,
+                                    DomainRankingOverrides domainRankingOverrides)
    {
        this.documentDbReader = documentDbReader;
        this.statefulIndex = statefulIndex;
+       this.domainRankingOverrides = domainRankingOverrides;
    }
    public List<SearchResultItem> rankResults(SearchParameters params,
@@ -57,7 +60,7 @@ public class IndexResultRankingService {
        if (resultIds.isEmpty())
            return List.of();
-       IndexResultScoreCalculator resultRanker = new IndexResultScoreCalculator(statefulIndex, rankingContext, params);
+       IndexResultScoreCalculator resultRanker = new IndexResultScoreCalculator(statefulIndex, domainRankingOverrides, rankingContext, params);
        List<SearchResultItem> results = new ArrayList<>(resultIds.size());
@@ -156,7 +159,7 @@ public class IndexResultRankingService {
        // for the selected results, as this would be comically expensive to do for all the results we
        // discard along the way
-       if (params.rankingParams.exportDebugData) {
+       if (params.rankingParams.getExportDebugData()) {
            var combinedIdsList = new LongArrayList(resultsList.size());
            for (var item : resultsList) {
                combinedIdsList.add(item.combinedId);

View File

@@ -2,10 +2,11 @@ package nu.marginalia.index.results;
import it.unimi.dsi.fastutil.ints.IntIterator;
import it.unimi.dsi.fastutil.ints.IntList;
+import nu.marginalia.api.searchquery.RpcResultRankingParameters;
+import nu.marginalia.api.searchquery.RpcTemporalBias;
import nu.marginalia.api.searchquery.model.compiled.CompiledQuery;
import nu.marginalia.api.searchquery.model.compiled.CompiledQueryLong;
import nu.marginalia.api.searchquery.model.results.ResultRankingContext;
-import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.api.searchquery.model.results.SearchResultItem;
import nu.marginalia.api.searchquery.model.results.debug.DebugRankingFactors;
import nu.marginalia.index.forward.spans.DocumentSpans;
@@ -40,14 +41,17 @@ public class IndexResultScoreCalculator {
    private final CombinedIndexReader index;
    private final QueryParams queryParams;
+   private final DomainRankingOverrides domainRankingOverrides;
    private final ResultRankingContext rankingContext;
    private final CompiledQuery<String> compiledQuery;
    public IndexResultScoreCalculator(StatefulIndex statefulIndex,
+                                     DomainRankingOverrides domainRankingOverrides,
                                      ResultRankingContext rankingContext,
                                      SearchParameters params)
    {
        this.index = statefulIndex.get();
+       this.domainRankingOverrides = domainRankingOverrides;
        this.rankingContext = rankingContext;
        this.queryParams = params.queryParams;
@@ -116,20 +120,20 @@ public class IndexResultScoreCalculator {
        float proximitiyFac = getProximitiyFac(decodedPositions, searchTerms.phraseConstraints, verbatimMatches, unorderedMatches, spans);
-       double score_firstPosition = params.tcfFirstPosition * (1.0 / Math.sqrt(unorderedMatches.firstPosition));
-       double score_verbatim = params.tcfVerbatim * verbatimMatches.getScore();
-       double score_proximity = params.tcfProximity * proximitiyFac;
-       double score_bM25 = params.bm25Weight
-               * wordFlagsQuery.root.visit(new Bm25GraphVisitor(params.bm25Params, unorderedMatches.getWeightedCounts(), docSize, rankingContext))
+       double score_firstPosition = params.getTcfFirstPositionWeight() * (1.0 / Math.sqrt(unorderedMatches.firstPosition));
+       double score_verbatim = params.getTcfVerbatimWeight() * verbatimMatches.getScore();
+       double score_proximity = params.getTcfProximityWeight() * proximitiyFac;
+       double score_bM25 = params.getBm25Weight()
+               * wordFlagsQuery.root.visit(new Bm25GraphVisitor(params.getBm25K(), params.getBm25B(), unorderedMatches.getWeightedCounts(), docSize, rankingContext))
                / (Math.sqrt(unorderedMatches.searchableKeywordCount + 1));
-       double score_bFlags = params.bm25Weight
-               * wordFlagsQuery.root.visit(new TermFlagsGraphVisitor(params.bm25Params, wordFlagsQuery.data, unorderedMatches.getWeightedCounts(), rankingContext))
+       double score_bFlags = params.getBm25Weight()
+               * wordFlagsQuery.root.visit(new TermFlagsGraphVisitor(params.getBm25K(), wordFlagsQuery.data, unorderedMatches.getWeightedCounts(), rankingContext))
                / (Math.sqrt(unorderedMatches.searchableKeywordCount + 1));
+       double rankingAdjustment = domainRankingOverrides.getRankingFactor(UrlIdCodec.getDomainId(combinedId));
        double score = normalize(
-               score_firstPosition + score_proximity + score_verbatim
-                       + score_bM25
-                       + score_bFlags,
+               rankingAdjustment * (score_firstPosition + score_proximity + score_verbatim + score_bM25 + score_bFlags),
                -Math.min(0, documentBonus) // The magnitude of documentBonus, if it is negative; otherwise 0
        );
@@ -245,9 +249,13 @@ public class IndexResultScoreCalculator {
    private double calculateDocumentBonus(long documentMetadata,
                                          int features,
                                          int length,
-                                         ResultRankingParameters rankingParams,
+                                         RpcResultRankingParameters rankingParams,
                                          @Nullable DebugRankingFactors debugRankingFactors) {
+       if (rankingParams.getDisablePenalties()) {
+           return 0.;
+       }
        int rank = DocumentMetadata.decodeRank(documentMetadata);
        int asl = DocumentMetadata.decodeAvgSentenceLength(documentMetadata);
        int quality = DocumentMetadata.decodeQuality(documentMetadata);
@@ -256,18 +264,18 @@ public class IndexResultScoreCalculator {
        int topology = DocumentMetadata.decodeTopology(documentMetadata);
        int year = DocumentMetadata.decodeYear(documentMetadata);
-       double averageSentenceLengthPenalty = (asl >= rankingParams.shortSentenceThreshold ? 0 : -rankingParams.shortSentencePenalty);
+       double averageSentenceLengthPenalty = (asl >= rankingParams.getShortSentenceThreshold() ? 0 : -rankingParams.getShortSentencePenalty());
        final double qualityPenalty = calculateQualityPenalty(size, quality, rankingParams);
-       final double rankingBonus = (255. - rank) * rankingParams.domainRankBonus;
+       final double rankingBonus = (255. - rank) * rankingParams.getDomainRankBonus();
        final double topologyBonus = Math.log(1 + topology);
-       final double documentLengthPenalty = length > rankingParams.shortDocumentThreshold ? 0 : -rankingParams.shortDocumentPenalty;
+       final double documentLengthPenalty = length > rankingParams.getShortDocumentThreshold() ? 0 : -rankingParams.getShortDocumentPenalty();
        final double temporalBias;
-       if (rankingParams.temporalBias == ResultRankingParameters.TemporalBias.RECENT) {
-           temporalBias = - Math.abs(year - PubDate.MAX_YEAR) * rankingParams.temporalBiasWeight;
-       } else if (rankingParams.temporalBias == ResultRankingParameters.TemporalBias.OLD) {
-           temporalBias = - Math.abs(year - PubDate.MIN_YEAR) * rankingParams.temporalBiasWeight;
+       if (rankingParams.getTemporalBias().getBias() == RpcTemporalBias.Bias.RECENT) {
+           temporalBias = - Math.abs(year - PubDate.MAX_YEAR) * rankingParams.getTemporalBiasWeight();
+       } else if (rankingParams.getTemporalBias().getBias() == RpcTemporalBias.Bias.OLD) {
+           temporalBias = - Math.abs(year - PubDate.MIN_YEAR) * rankingParams.getTemporalBiasWeight();
        } else {
            temporalBias = 0;
        }
@@ -506,14 +514,14 @@ public class IndexResultScoreCalculator {
    }
-   private double calculateQualityPenalty(int size, int quality, ResultRankingParameters rankingParams) {
+   private double calculateQualityPenalty(int size, int quality, RpcResultRankingParameters rankingParams) {
        if (size < 400) {
            if (quality < 5)
                return 0;
-           return -quality * rankingParams.qualityPenalty;
+           return -quality * rankingParams.getQualityPenalty();
        }
        else {
-           return -quality * rankingParams.qualityPenalty * 20;
+           return -quality * rankingParams.getQualityPenalty() * 20;
        }
    }
@@ -575,3 +583,4 @@
    }
}

View File

@@ -3,7 +3,6 @@ package nu.marginalia.index.results;
import nu.marginalia.api.searchquery.model.compiled.CqDataInt;
import nu.marginalia.api.searchquery.model.compiled.CqDataLong;
import nu.marginalia.api.searchquery.model.compiled.CqExpression;
-import nu.marginalia.api.searchquery.model.results.Bm25Parameters;
import nu.marginalia.api.searchquery.model.results.ResultRankingContext;
import nu.marginalia.model.idx.WordFlags;
@@ -15,15 +14,14 @@ public class TermFlagsGraphVisitor implements CqExpression.DoubleVisitor {
    private final CqDataLong wordMetaData;
    private final CqDataInt frequencies;
    private final float[] counts;
-   private final Bm25Parameters bm25Parameters;
+   private final double k1;
    private final int docCount;
-   public TermFlagsGraphVisitor(Bm25Parameters bm25Parameters,
+   public TermFlagsGraphVisitor(double k1,
                                 CqDataLong wordMetaData,
                                 float[] counts,
                                 ResultRankingContext ctx) {
-       this.bm25Parameters = bm25Parameters;
+       this.k1 = k1;
        this.counts = counts;
        this.docCount = ctx.termFreqDocCount();
        this.wordMetaData = wordMetaData;
@@ -55,7 +53,7 @@ public class TermFlagsGraphVisitor implements CqExpression.DoubleVisitor {
        int freq = frequencies.get(idx);
        // note we override b to zero for priority terms as they are independent of document length
-       return invFreq(docCount, freq) * f(bm25Parameters.k(), 0, count, 0);
+       return invFreq(docCount, freq) * f(k1, 0, count, 0);
    }
    private double evaluatePriorityScore(int idx) {

View File

@@ -1,7 +0,0 @@
package nu.marginalia.index.query.limit;
public record QueryLimits(int resultsByDomain, int resultsTotal, int timeoutMs, int fetchSize) {
public QueryLimits forSingleDomain() {
return new QueryLimits(resultsTotal, resultsTotal, timeoutMs, fetchSize);
}
}

View File

@@ -4,10 +4,11 @@ import com.google.inject.Guice;
import com.google.inject.Inject;
import nu.marginalia.IndexLocations;
import nu.marginalia.api.searchquery.RpcDecoratedResultItem;
+import nu.marginalia.api.searchquery.RpcQueryLimits;
import nu.marginalia.api.searchquery.model.query.SearchPhraseConstraint;
import nu.marginalia.api.searchquery.model.query.SearchQuery;
import nu.marginalia.api.searchquery.model.query.SearchSpecification;
-import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
+import nu.marginalia.api.searchquery.model.results.PrototypeRankingParameters;
import nu.marginalia.index.construction.DocIdRewriter;
import nu.marginalia.index.construction.full.FullIndexConstructor;
import nu.marginalia.index.construction.prio.PrioIndexConstructor;
@@ -17,7 +18,6 @@ import nu.marginalia.index.forward.construction.ForwardIndexConverter;
import nu.marginalia.index.index.StatefulIndex;
import nu.marginalia.index.journal.IndexJournal;
import nu.marginalia.index.journal.IndexJournalSlopWriter;
-import nu.marginalia.index.query.limit.QueryLimits;
import nu.marginalia.index.query.limit.QueryStrategy;
import nu.marginalia.index.query.limit.SpecificationLimit;
import nu.marginalia.linkdb.docs.DocumentDbReader;
@@ -115,9 +115,16 @@ public class IndexQueryServiceIntegrationSmokeTest {
        var rsp = queryService.justQuery(
                SearchSpecification.builder()
-                       .queryLimits(new QueryLimits(10, 10, Integer.MAX_VALUE, 4000))
+                       .queryLimits(
+                               RpcQueryLimits.newBuilder()
+                                       .setResultsByDomain(10)
+                                       .setResultsTotal(10)
+                                       .setTimeoutMs(Integer.MAX_VALUE)
+                                       .setFetchSize(4000)
+                                       .build()
+                       )
                        .queryStrategy(QueryStrategy.SENTENCE)
-                       .rankingParams(ResultRankingParameters.sensibleDefaults())
+                       .rankingParams(PrototypeRankingParameters.sensibleDefaults())
                        .domains(new ArrayList<>())
                        .searchSetIdentifier("NONE")
                        .query(
@@ -171,9 +178,16 @@ public class IndexQueryServiceIntegrationSmokeTest {
        var rsp = queryService.justQuery(
                SearchSpecification.builder()
-                       .queryLimits(new QueryLimits(10, 10, Integer.MAX_VALUE, 4000))
+                       .queryLimits(
+                               RpcQueryLimits.newBuilder()
+                                       .setResultsByDomain(10)
+                                       .setResultsTotal(10)
+                                       .setTimeoutMs(Integer.MAX_VALUE)
+                                       .setFetchSize(4000)
+                                       .build()
+                       )
                        .queryStrategy(QueryStrategy.SENTENCE)
-                       .rankingParams(ResultRankingParameters.sensibleDefaults())
+                       .rankingParams(PrototypeRankingParameters.sensibleDefaults())
                        .domains(new ArrayList<>())
                        .searchSetIdentifier("NONE")
                        .query(
@@ -225,8 +239,15 @@ public class IndexQueryServiceIntegrationSmokeTest {
        var rsp = queryService.justQuery(
                SearchSpecification.builder()
-                       .queryLimits(new QueryLimits(10, 10, Integer.MAX_VALUE, 4000))
-                       .rankingParams(ResultRankingParameters.sensibleDefaults())
+                       .queryLimits(
+                               RpcQueryLimits.newBuilder()
+                                       .setResultsByDomain(10)
+                                       .setResultsTotal(10)
+                                       .setTimeoutMs(Integer.MAX_VALUE)
+                                       .setFetchSize(4000)
+                                       .build()
+                       )
+                       .rankingParams(PrototypeRankingParameters.sensibleDefaults())
                        .queryStrategy(QueryStrategy.SENTENCE)
                        .domains(List.of(2))
                        .query(
@@ -282,11 +303,18 @@ public class IndexQueryServiceIntegrationSmokeTest {
        var rsp = queryService.justQuery(
                SearchSpecification.builder()
-                       .queryLimits(new QueryLimits(10, 10, Integer.MAX_VALUE, 4000))
+                       .queryLimits(
+                               RpcQueryLimits.newBuilder()
+                                       .setResultsByDomain(10)
+                                       .setResultsTotal(10)
+                                       .setTimeoutMs(Integer.MAX_VALUE)
+                                       .setFetchSize(4000)
+                                       .build()
+                       )
                        .year(SpecificationLimit.equals(1998))
                        .queryStrategy(QueryStrategy.SENTENCE)
                        .searchSetIdentifier("NONE")
-                       .rankingParams(ResultRankingParameters.sensibleDefaults())
+                       .rankingParams(PrototypeRankingParameters.sensibleDefaults())
                        .query(
                                SearchQuery.builder()
                                        .compiledQuery("4")

View File

@@ -4,10 +4,11 @@ import com.google.inject.Guice;
 import com.google.inject.Inject;
 import it.unimi.dsi.fastutil.ints.IntList;
 import nu.marginalia.IndexLocations;
+import nu.marginalia.api.searchquery.RpcQueryLimits;
 import nu.marginalia.api.searchquery.model.query.SearchPhraseConstraint;
 import nu.marginalia.api.searchquery.model.query.SearchQuery;
 import nu.marginalia.api.searchquery.model.query.SearchSpecification;
-import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
+import nu.marginalia.api.searchquery.model.results.PrototypeRankingParameters;
 import nu.marginalia.hash.MurmurHash3_128;
 import nu.marginalia.index.construction.DocIdRewriter;
 import nu.marginalia.index.construction.full.FullIndexConstructor;
@@ -18,7 +19,6 @@ import nu.marginalia.index.forward.construction.ForwardIndexConverter;
 import nu.marginalia.index.index.StatefulIndex;
 import nu.marginalia.index.journal.IndexJournal;
 import nu.marginalia.index.journal.IndexJournalSlopWriter;
-import nu.marginalia.index.query.limit.QueryLimits;
 import nu.marginalia.index.query.limit.QueryStrategy;
 import nu.marginalia.index.query.limit.SpecificationLimit;
 import nu.marginalia.linkdb.docs.DocumentDbReader;
@@ -389,13 +389,20 @@ public class IndexQueryServiceIntegrationTest {
     SearchSpecification basicQuery(Function<SearchSpecification.SearchSpecificationBuilder, SearchSpecification.SearchSpecificationBuilder> mutator)
     {
         var builder = SearchSpecification.builder()
-                .queryLimits(new QueryLimits(10, 10, Integer.MAX_VALUE, 4000))
+                .queryLimits(
+                        RpcQueryLimits.newBuilder()
+                                .setResultsByDomain(10)
+                                .setResultsTotal(10)
+                                .setTimeoutMs(Integer.MAX_VALUE)
+                                .setFetchSize(4000)
+                                .build()
+                )
                 .queryStrategy(QueryStrategy.SENTENCE)
                 .year(SpecificationLimit.none())
                 .quality(SpecificationLimit.none())
                 .size(SpecificationLimit.none())
                 .rank(SpecificationLimit.none())
-                .rankingParams(ResultRankingParameters.sensibleDefaults())
+                .rankingParams(PrototypeRankingParameters.sensibleDefaults())
                 .domains(new ArrayList<>())
                 .searchSetIdentifier("NONE");
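The mutator parameter lets individual tests adjust these shared defaults. A minimal, illustrative call (hypothetical test code, not part of this changeset) might look like:

    // Narrow the shared defaults down to documents from 1998 for one test case
    var spec = basicQuery(builder -> builder.year(SpecificationLimit.equals(1998)));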

View File

@@ -0,0 +1,103 @@
package nu.marginalia.index.results;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import nu.marginalia.db.DbDomainQueries;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.test.TestMigrationLoader;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.parallel.Execution;
import org.junit.jupiter.api.parallel.ExecutionMode;
import org.testcontainers.containers.MariaDBContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.sql.SQLException;
@Testcontainers
@Execution(ExecutionMode.SAME_THREAD)
@Tag("slow")
class DomainRankingOverridesTest {
@Container
static MariaDBContainer<?> mariaDBContainer = new MariaDBContainer<>("mariadb")
.withDatabaseName("WMSA_prod")
.withUsername("wmsa")
.withPassword("wmsa")
.withNetworkAliases("mariadb");
private static DbDomainQueries domainQueries;
@BeforeAll
public static void setup() throws SQLException {
HikariConfig config = new HikariConfig();
config.setJdbcUrl(mariaDBContainer.getJdbcUrl());
config.setUsername("wmsa");
config.setPassword("wmsa");
var dataSource = new HikariDataSource(config);
TestMigrationLoader.flywayMigration(dataSource);
try (var conn = dataSource.getConnection();
var stmt = conn.createStatement()) {
stmt.executeQuery("DELETE FROM EC_DOMAIN"); // Wipe any old state from other test runs
stmt.executeQuery("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('first.example.com', 'example.com', 1)");
stmt.executeQuery("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('second.example.com', 'example.com', 1)");
stmt.executeQuery("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('third.example.com', 'example.com', 1)");
stmt.executeQuery("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('not-added.example.com', 'example.com', 1)");
}
domainQueries = new DbDomainQueries(dataSource);
}
@Test
public void test() throws IOException {
Path overridesFile = Files.createTempFile(getClass().getSimpleName(), ".txt");
try {
Files.writeString(overridesFile, """
# A comment
value 0.75
domain first.example.com
domain second.example.com
value 1.1
domain third.example.com
""",
StandardOpenOption.APPEND);
var overrides = new DomainRankingOverrides(domainQueries, overridesFile);
overrides.reloadFile();
Assertions.assertEquals(0.75, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("first.example.com"))
));
Assertions.assertEquals(0.75, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("second.example.com"))
));
Assertions.assertEquals(1.1, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("third.example.com"))
));
Assertions.assertEquals(1.0, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("not-added.example.com"))
));
Assertions.assertEquals(1.0, overrides.getRankingFactor(1<<23));
}
finally {
Files.deleteIfExists(overridesFile);
}
}
}
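The test above pins down the overrides file format: a "value X" line sets the ranking factor applied to the "domain ..." lines that follow it, comments start with #, and domains not listed default to 1.0. The DomainRankingOverrides implementation itself is not part of this excerpt; a minimal sketch of a parser honoring that contract (hypothetical code, with method and variable names assumed) could look like:

    // Hypothetical sketch: turn "value"/"domain" directives into a domainId -> factor map
    static Map<Integer, Double> parseOverrides(List<String> lines, DbDomainQueries domainQueries) {
        Map<Integer, Double> rankingFactors = new HashMap<>();
        double currentValue = 1.0;
        for (String raw : lines) {
            String line = raw.trim();
            if (line.isEmpty() || line.startsWith("#")) continue;   // skip blanks and comments
            if (line.startsWith("value ")) {
                currentValue = Double.parseDouble(line.substring("value ".length()).trim());
            } else if (line.startsWith("domain ")) {
                String domain = line.substring("domain ".length()).trim();
                // getDomainId is used the same way in the test above
                rankingFactors.put(domainQueries.getDomainId(new EdgeDomain(domain)), currentValue);
            }
        }
        return rankingFactors;
    }

Lookups for unknown ids would then fall back to 1.0, matching the assertions on not-added.example.com and the out-of-range id 1<<23.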

View File

@@ -23,16 +23,33 @@ public class SimpleBlockingThreadPool {
     private final Logger logger = LoggerFactory.getLogger(SimpleBlockingThreadPool.class);

     public SimpleBlockingThreadPool(String name, int poolSize, int queueSize) {
+        this(name, poolSize, queueSize, ThreadType.PLATFORM);
+    }
+
+    public SimpleBlockingThreadPool(String name, int poolSize, int queueSize, ThreadType threadType) {
         tasks = new ArrayBlockingQueue<>(queueSize);

         for (int i = 0; i < poolSize; i++) {
-            Thread worker = new Thread(this::worker, name + "[" + i + "]");
-            worker.setDaemon(true);
-            worker.start();
+            Thread.Builder threadBuilder = switch (threadType) {
+                case VIRTUAL -> Thread.ofVirtual();
+                case PLATFORM -> Thread.ofPlatform().daemon(true);
+            };
+
+            Thread worker = threadBuilder
+                    .name(name + "[" + i + "]")
+                    .start(this::worker);
+
             workers.add(worker);
         }
     }

+    public enum ThreadType {
+        VIRTUAL,
+        PLATFORM
+    }
+
     public void submit(Task task) throws InterruptedException {
         tasks.put(task);
     }
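With the new overload, callers can opt in to virtual threads per pool. A minimal usage sketch (hypothetical caller code; the Task type is assumed to be the pool's own Runnable-like functional interface, and fetchAndStore/url are placeholders):

    // Build a pool of 32 virtual-thread workers behind a bounded queue of 16 tasks
    var pool = new SimpleBlockingThreadPool("crawler", 32, 16, SimpleBlockingThreadPool.ThreadType.VIRTUAL);

    // submit() blocks when the queue is full and may throw InterruptedException,
    // which gives natural backpressure to the producer
    pool.submit(() -> fetchAndStore(url));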

View File

@@ -45,6 +45,11 @@ public class GammaCodedSequenceArrayColumn extends AbstractObjectColumn<List<Gam
         );
     }

+    @Override
+    public int alignmentSize() {
+        return 1;
+    }
+
     public Reader openUnregistered(URI uri, int page) throws IOException {
         return new Reader(
                 dataColumn.openUnregistered(uri, page),
@@ -109,6 +114,11 @@ public class GammaCodedSequenceArrayColumn extends AbstractObjectColumn<List<Gam
         dataReader.skip(toSkip);
     }

+    @Override
+    public boolean isDirect() {
+        return dataReader.isDirect();
+    }
+
     @Override
     public boolean hasRemaining() throws IOException {
         return groupsReader.hasRemaining();

View File

@@ -44,6 +44,11 @@ public class GammaCodedSequenceColumn extends AbstractObjectColumn<GammaCodedSeq
         );
     }

+    @Override
+    public int alignmentSize() {
+        return 1;
+    }
+
     public Reader openUnregistered(URI uri, int page) throws IOException {
         return new Reader(
                 Storage.reader(uri, this, page, false),
@@ -96,6 +101,11 @@ public class GammaCodedSequenceColumn extends AbstractObjectColumn<GammaCodedSeq
         this.indexReader = indexReader;
     }

+    @Override
+    public boolean isDirect() {
+        return storage.isDirect();
+    }
+
     @Override
     public AbstractColumn<?, ?> columnDesc() {
         return GammaCodedSequenceColumn.this;

View File

@@ -45,6 +45,11 @@ public class VarintCodedSequenceArrayColumn extends AbstractObjectColumn<List<Va
         );
     }

+    @Override
+    public int alignmentSize() {
+        return 0;
+    }
+
     public Reader openUnregistered(URI uri, int page) throws IOException {
         return new Reader(
                 dataColumn.openUnregistered(uri, page),
@@ -109,6 +114,11 @@ public class VarintCodedSequenceArrayColumn extends AbstractObjectColumn<List<Va
         dataReader.skip(toSkip);
     }

+    @Override
+    public boolean isDirect() {
+        return dataReader.isDirect();
+    }
+
     @Override
     public boolean hasRemaining() throws IOException {
         return groupsReader.hasRemaining();

View File

@@ -44,6 +44,11 @@ public class VarintCodedSequenceColumn extends AbstractObjectColumn<VarintCodedS
         );
     }

+    @Override
+    public int alignmentSize() {
+        return 1;
+    }
+
     public Reader openUnregistered(URI uri, int page) throws IOException {
         return new Reader(
                 Storage.reader(uri, this, page, false),
@@ -101,6 +106,11 @@ public class VarintCodedSequenceColumn extends AbstractObjectColumn<VarintCodedS
         return VarintCodedSequenceColumn.this;
     }

+    @Override
+    public boolean isDirect() {
+        return storage.isDirect();
+    }
+
     @Override
     public void skip(long positions) throws IOException {
         for (int i = 0; i < positions; i++) {

View File

@@ -155,8 +155,15 @@ public class SentenceExtractor {
     public List<DocumentSentence> extractSentencesFromString(String text, EnumSet<HtmlTag> htmlTags) {
         String[] sentences;

+        // Safety net against malformed data DOS attacks,
+        // found 5+ MB <p>-tags in the wild that just break
+        // the sentence extractor causing it to stall forever.
+        if (text.length() > 50_000) {
+            // 50k chars can hold a small novel, let alone single html tags
+            text = text.substring(0, 50_000);
+        }
+
         // Normalize spaces
         text = normalizeSpaces(text);

         // Split into sentences
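As a quick illustration of the new guard (hypothetical calling code; the extractor instance and the HtmlTag/DocumentSentence types are assumed from the surrounding class):

    // Inputs longer than 50_000 characters are clamped before sentence splitting,
    // so a pathological multi-megabyte tag no longer stalls the extractor.
    String oversized = "word ".repeat(100_000);   // roughly 500k characters
    List<DocumentSentence> sentences =
            sentenceExtractor.extractSentencesFromString(oversized, EnumSet.noneOf(HtmlTag.class));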

View File

@@ -5,9 +5,7 @@ import nu.marginalia.actor.state.*;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
+import java.util.*;

 public abstract class RecordActorPrototype implements ActorPrototype {
@@ -118,7 +116,7 @@ public abstract class RecordActorPrototype implements ActorPrototype {
     }

     private String functionName(Class<? extends ActorStep> functionClass) {
-        return functionClass.getSimpleName().toUpperCase();
+        return ActorStep.functionName(functionClass);
     }

     private ActorStep constructState(String message) throws ReflectiveOperationException {
@@ -145,4 +143,43 @@ public abstract class RecordActorPrototype implements ActorPrototype {
         }
     }

+    /** Get a list of JSON prototypes for each actor step declared by this actor */
+    @SuppressWarnings("unchecked")
+    public Map<String, String> getMessagePrototypes() {
+        Map<String, String> messagePrototypes = new HashMap<>();
+
+        for (var clazz : getClass().getDeclaredClasses()) {
+            if (!clazz.isRecord() || !ActorStep.class.isAssignableFrom(clazz))
+                continue;
+
+            StringJoiner sj = new StringJoiner(",\n\t", "{\n\t", "\n}");
+
+            renderToJsonPrototype(sj, (Class<? extends Record>) clazz);
+
+            messagePrototypes.put(ActorStep.functionName((Class<? extends ActorStep>) clazz), sj.toString());
+        }
+
+        return messagePrototypes;
+    }
+
+    @SuppressWarnings("unchecked")
+    private void renderToJsonPrototype(StringJoiner sj, Class<? extends Record> recordType) {
+        for (var field : recordType.getDeclaredFields()) {
+            String typeName = field.getType().getSimpleName();
+
+            if ("List".equals(typeName)) {
+                sj.add(String.format("\"%s\": [ ]", field.getName()));
+            }
+            else if (field.getType().isRecord()) {
+                var innerSj = new StringJoiner(",", "{", "}");
+                renderToJsonPrototype(innerSj, (Class<? extends Record>) field.getType());
+                sj.add(String.format("\"%s\": %s", field.getName(), innerSj));
+            }
+            else {
+                sj.add(String.format("\"%s\": \"%s\"", field.getName(), typeName));
+            }
+        }
+    }
+
 }
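To make the rendering concrete: for a hypothetical actor step declared as

    record Initial(String source, int batchSize) implements ActorStep {}

getMessagePrototypes() would map the key "INITIAL" (via ActorStep.functionName) to a prototype roughly like

    {
        "source": "String",
        "batchSize": "int"
    }

since non-record, non-List fields are rendered as "name": "TypeName". The record name and fields here are invented for illustration only.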

View File

@@ -1,3 +1,7 @@
 package nu.marginalia.actor.state;

-public interface ActorStep {}
+public interface ActorStep {
+    static String functionName(Class<? extends ActorStep> type) {
+        return type.getSimpleName().toUpperCase();
+    }
+}

View File

@@ -87,6 +87,8 @@ dependencies {
     implementation libs.commons.compress
     implementation libs.sqlite

+    implementation libs.bundles.httpcomponents
+
     testImplementation libs.bundles.slf4j.test
     testImplementation libs.bundles.junit
     testImplementation libs.mockito

View File

@@ -12,7 +12,6 @@ import nu.marginalia.converting.sideload.SideloadSourceFactory;
 import nu.marginalia.converting.writer.ConverterBatchWritableIf;
 import nu.marginalia.converting.writer.ConverterBatchWriter;
 import nu.marginalia.converting.writer.ConverterWriter;
-import nu.marginalia.io.CrawledDomainReader;
 import nu.marginalia.io.SerializableCrawlDataStream;
 import nu.marginalia.mq.MessageQueueFactory;
 import nu.marginalia.mqapi.converting.ConvertRequest;
@@ -36,6 +35,7 @@ import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.sql.SQLException;
+import java.util.ArrayList;
 import java.util.Collection;
 import java.util.List;
 import java.util.Optional;
@@ -51,6 +51,7 @@ public class ConverterMain extends ProcessMainClass {
     private final ProcessHeartbeat heartbeat;
     private final FileStorageService fileStorageService;
     private final SideloadSourceFactory sideloadSourceFactory;
+    private static final int SIDELOAD_THRESHOLD = Integer.getInteger("converter.sideloadThreshold", 10_000);

     public static void main(String... args) throws Exception {
@@ -201,12 +202,26 @@ public class ConverterMain extends ProcessMainClass {
             processedDomains.set(batchingWorkLog.size());
             heartbeat.setProgress(processedDomains.get() / (double) totalDomains);

-            for (var domain : WorkLog.iterableMap(crawlDir.getLogFile(),
+            logger.info("Processing small items");
+
+            // We separate the large and small domains to reduce the number of critical sections,
+            // as the large domains have a separate processing track that doesn't store everything
+            // in memory
+            final List<Path> bigTasks = new ArrayList<>();
+
+            // First process the small items
+            for (var dataPath : WorkLog.iterableMap(crawlDir.getLogFile(),
                     new CrawlDataLocator(crawlDir.getDir(), batchingWorkLog)))
             {
+                if (SerializableCrawlDataStream.getSizeHint(dataPath) >= SIDELOAD_THRESHOLD) {
+                    bigTasks.add(dataPath);
+                    continue;
+                }
+
                 pool.submit(() -> {
-                    try {
-                        ConverterBatchWritableIf writable = processor.createWritable(domain);
+                    try (var dataStream = SerializableCrawlDataStream.openDataStream(dataPath)) {
+                        ConverterBatchWritableIf writable = processor.fullProcessing(dataStream);
                         converterWriter.accept(writable);
                     }
                     catch (Exception ex) {
@@ -225,10 +240,39 @@ public class ConverterMain extends ProcessMainClass {
             do {
                 System.out.println("Waiting for pool to terminate... " + pool.getActiveCount() + " remaining");
             } while (!pool.awaitTermination(60, TimeUnit.SECONDS));

+            logger.info("Processing large items");
+            try (var hb = heartbeat.createAdHocTaskHeartbeat("Large Domains")) {
+                int bigTaskIdx = 0;
+                // Next the big items domain-by-domain
+                for (var dataPath : bigTasks) {
+                    hb.progress(dataPath.toFile().getName(), bigTaskIdx++, bigTasks.size());
+
+                    try {
+                        // SerializableCrawlDataStream is autocloseable, we can't try-with-resources because then it will be
+                        // closed before it's consumed by the converterWriter.  Instead, the converterWriter guarantees it
+                        // will close it after it's consumed.
+                        var stream = SerializableCrawlDataStream.openDataStream(dataPath);
+                        ConverterBatchWritableIf writable = processor.simpleProcessing(stream, SerializableCrawlDataStream.getSizeHint(dataPath));
+
+                        converterWriter.accept(writable);
+                    }
+                    catch (Exception ex) {
+                        logger.info("Error in processing", ex);
+                    }
+                    finally {
+                        heartbeat.setProgress(processedDomains.incrementAndGet() / (double) totalDomains);
+                    }
+                }
+            }
+
+            logger.info("Processing complete");
         }
     }

-    private static class CrawlDataLocator implements Function<WorkLogEntry, Optional<SerializableCrawlDataStream>> {
+    private static class CrawlDataLocator implements Function<WorkLogEntry, Optional<Path>> {

         private final Path crawlRootDir;
         private final BatchingWorkLog batchingWorkLog;
@@ -239,7 +283,7 @@ public class ConverterMain extends ProcessMainClass {
         }

         @Override
-        public Optional<SerializableCrawlDataStream> apply(WorkLogEntry entry) {
+        public Optional<Path> apply(WorkLogEntry entry) {
             if (batchingWorkLog.isItemProcessed(entry.id())) {
                 return Optional.empty();
             }
@@ -252,7 +296,7 @@ public class ConverterMain extends ProcessMainClass {
             }

             try {
-                return Optional.of(CrawledDomainReader.createDataStream(path));
+                return Optional.of(path);
             }
             catch (Exception ex) {
                 return Optional.empty();
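Because SIDELOAD_THRESHOLD is read with Integer.getInteger("converter.sideloadThreshold", 10_000), the cutoff between the two tracks can be tuned per run with a JVM system property (for example -Dconverter.sideloadThreshold=50000) rather than a code change. Domains whose size hint is at or above the threshold are deferred to the sequential, domain-by-domain "large items" pass that uses the memory-friendly simpleProcessing track, while the rest go through the pooled fullProcessing path.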

View File

@@ -19,6 +19,7 @@ import nu.marginalia.model.idx.WordFlags;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

+import java.io.IOException;
 import java.net.URISyntaxException;
 import java.util.ArrayList;
 import java.util.List;
@@ -91,7 +92,7 @@ public class DocumentProcessor {
                                 DocumentClass documentClass,
                                 DocumentDecorator documentDecorator,
                                 DomainLinks externalDomainLinks,
-                                ProcessedDocument ret) throws URISyntaxException, DisqualifiedException
+                                ProcessedDocument ret) throws URISyntaxException, IOException, DisqualifiedException
     {
         var crawlerStatus = CrawlerDocumentStatus.valueOf(crawledDocument.crawlerStatus);
@@ -109,7 +110,7 @@ public class DocumentProcessor {
         ret.state = crawlerStatusToUrlState(crawledDocument.crawlerStatus, crawledDocument.httpStatus);

-        final var plugin = findPlugin(crawledDocument);
+        AbstractDocumentProcessorPlugin plugin = findPlugin(crawledDocument);

         EdgeUrl url = new EdgeUrl(crawledDocument.url);
         LinkTexts linkTexts = anchorTextKeywords.getAnchorTextKeywords(externalDomainLinks, url);

View File

@@ -32,7 +32,6 @@ import java.util.*;
 import java.util.regex.Pattern;

 public class DomainProcessor {
-    private static final int SIDELOAD_THRESHOLD = Integer.getInteger("converter.sideloadThreshold", 10_000);

     private final DocumentProcessor documentProcessor;
     private final SiteWords siteWords;
     private final AnchorTagsSource anchorTagsSource;
@@ -54,21 +53,9 @@ public class DomainProcessor {
         geoIpDictionary.waitReady();
     }

-    public ConverterBatchWritableIf createWritable(SerializableCrawlDataStream domain) {
-        final int sizeHint = domain.sizeHint();
-
-        if (sizeHint > SIDELOAD_THRESHOLD) {
-            // If the file is too big, we run a processing mode that doesn't
-            // require loading the entire dataset into RAM
-            return sideloadProcessing(domain, sizeHint);
-        }
-
-        return fullProcessing(domain);
-    }
-
-    public SideloadProcessing sideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) {
+    public SimpleProcessing simpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) {
         try {
-            return new SideloadProcessing(dataStream, sizeHint, extraKeywords);
+            return new SimpleProcessing(dataStream, sizeHint, extraKeywords);
         }
         catch (Exception ex) {
             logger.warn("Failed to process domain sideload", ex);
@@ -76,9 +63,9 @@ public class DomainProcessor {
         }
     }

-    public SideloadProcessing sideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint) {
+    public SimpleProcessing simpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint) {
         try {
-            return new SideloadProcessing(dataStream, sizeHint);
+            return new SimpleProcessing(dataStream, sizeHint);
         }
         catch (Exception ex) {
             logger.warn("Failed to process domain sideload", ex);
@@ -86,22 +73,84 @@ public class DomainProcessor {
         }
     }

-    public class SideloadProcessing implements ConverterBatchWritableIf, SideloadSource {
+    @Nullable
+    public ProcessedDomain fullProcessing(SerializableCrawlDataStream dataStream) {
+        try {
+            if (!dataStream.hasNext()) {
+                return null;
+            }
+
+            List<ProcessedDocument> docs = new ArrayList<>();
+            Set<String> processedUrls = new HashSet<>();
+
+            if (!(dataStream.next() instanceof CrawledDomain crawledDomain)) {
+                throw new IllegalStateException("First record must be a domain, was " + dataStream.next().getClass().getSimpleName());
+            }
+
+            DomainLinks externalDomainLinks = anchorTagsSource.getAnchorTags(crawledDomain.getDomain());
+            DocumentDecorator documentDecorator = new DocumentDecorator();
+
+            // Process Domain Record
+
+            ProcessedDomain ret = new ProcessedDomain();
+            processDomain(crawledDomain, ret, documentDecorator);
+            ret.documents = docs;
+
+            // Process Documents
+
+            try (var deduplicator = new LshDocumentDeduplicator()) {
+                while (dataStream.hasNext()) {
+                    if (!(dataStream.next() instanceof CrawledDocument doc))
+                        continue;
+                    if (doc.url == null)
+                        continue;
+                    if (doc.documentBodyBytes.length == 0)
+                        continue;
+                    if (!processedUrls.add(doc.url))
+                        continue;
+
+                    try {
+                        var processedDoc = documentProcessor.process(doc, ret.domain, externalDomainLinks, documentDecorator);
+                        deduplicator.markIfDuplicate(processedDoc);
+                        docs.add(processedDoc);
+                    } catch (Exception ex) {
+                        logger.warn("Failed to process " + doc.url, ex);
+                    }
+                }
+            }
+
+            // Add late keywords and features from domain-level information
+            calculateStatistics(ret, externalDomainLinks);
+
+            return ret;
+        }
+        catch (Exception ex) {
+            logger.warn("Failed to process domain", ex);
+            return null;
+        }
+    }
+
+    /** The simple processing track processes documents individually, and does not perform any domain-level analysis.
+     *  This is needed to process extremely large domains, which would otherwise eat up too much RAM.
+     */
+    public class SimpleProcessing implements ConverterBatchWritableIf, SideloadSource {
         private final SerializableCrawlDataStream dataStream;
         private final ProcessedDomain domain;
         private final DocumentDecorator documentDecorator;
         private final Set<String> processedUrls = new HashSet<>();
         private final DomainLinks externalDomainLinks;
         private final LshDocumentDeduplicator deduplicator = new LshDocumentDeduplicator();

         private static final ProcessingIterator.Factory iteratorFactory = ProcessingIterator.factory(8,
                 Integer.getInteger("java.util.concurrent.ForkJoinPool.common.parallelism", Runtime.getRuntime().availableProcessors())
         );

-        SideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint) throws IOException {
+        SimpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint) throws IOException {
             this(dataStream, sizeHint, List.of());
         }

-        SideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) throws IOException {
+        SimpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) throws IOException {
             this.dataStream = dataStream;

             if (!dataStream.hasNext() || !(dataStream.next() instanceof CrawledDomain crawledDomain))
@@ -128,6 +177,7 @@ public class DomainProcessor {
         @Override
         public Iterator<ProcessedDocument> getDocumentsStream() {
             return iteratorFactory.create((taskConsumer) -> {
+
                 while (dataStream.hasNext())
                 {
                     if (!(dataStream.next() instanceof CrawledDocument doc))
@@ -172,65 +222,6 @@ public class DomainProcessor {
         }
     }

-    @Nullable
-    public ProcessedDomain fullProcessing(SerializableCrawlDataStream dataStream) {
-        try {
-            if (!dataStream.hasNext()) {
-                return null;
-            }
-
-            List<ProcessedDocument> docs = new ArrayList<>();
-            Set<String> processedUrls = new HashSet<>();
-
-            if (!(dataStream.next() instanceof CrawledDomain crawledDomain)) {
-                throw new IllegalStateException("First record must be a domain, was " + dataStream.next().getClass().getSimpleName());
-            }
-
-            DomainLinks externalDomainLinks = anchorTagsSource.getAnchorTags(crawledDomain.getDomain());
-            DocumentDecorator documentDecorator = new DocumentDecorator();
-
-            // Process Domain Record
-
-            ProcessedDomain ret = new ProcessedDomain();
-            processDomain(crawledDomain, ret, documentDecorator);
-            ret.documents = docs;
-
-            // Process Documents
-
-            try (var deduplicator = new LshDocumentDeduplicator()) {
-                while (dataStream.hasNext()) {
-                    if (!(dataStream.next() instanceof CrawledDocument doc))
-                        continue;
-                    if (doc.url == null)
-                        continue;
-                    if (doc.documentBody.isBlank())
-                        continue;
-                    if (!processedUrls.add(doc.url))
-                        continue;
-
-                    try {
-                        var processedDoc = documentProcessor.process(doc, ret.domain, externalDomainLinks, documentDecorator);
-                        deduplicator.markIfDuplicate(processedDoc);
-                        docs.add(processedDoc);
-                    } catch (Exception ex) {
-                        logger.warn("Failed to process " + doc.url, ex);
-                    }
-                }
-            }
-
-            // Add late keywords and features from domain-level information
-            calculateStatistics(ret, externalDomainLinks);
-
-            return ret;
-        }
-        catch (Exception ex) {
-            logger.warn("Failed to process domain", ex);
-            return null;
-        }
-    }
-
     private void processDomain(CrawledDomain crawledDomain,
                                ProcessedDomain domain,
                                DocumentDecorator decorator)

View File

@@ -116,7 +116,7 @@ public class AdblockSimulator {

     // Refrain from cleaning up this code, it's very hot code and needs to be fast.
-    // This version is about 100x faster than the a "clean" first stab implementation.
+    // This version is about 100x faster than a "clean" first stab implementation.

     class RuleVisitor implements NodeFilter {
         public boolean sawAds;

View File

@@ -23,7 +23,7 @@ public class DocumentGeneratorExtractor {
         var tags = doc.select("meta[name=generator]");

-        if (tags.size() == 0) {
+        if (tags.isEmpty()) {
             // Some sites have a comment in the head instead of a meta tag
             return fingerprintServerTech(doc, responseHeaders);
         }

View File

@@ -24,7 +24,7 @@ public class DocumentValuator {
         double scriptPenalty = getScriptPenalty(parsedDocument);
         double chatGptPenalty = getChatGptContentFarmPenalty(parsedDocument);

-        int rawLength = crawledDocument.documentBody.length();
+        int rawLength = crawledDocument.documentBodyBytes.length;

         if (textLength == 0) {
             throw new DisqualifiedException(DisqualifiedException.DisqualificationReason.LENGTH);

View File

@@ -218,7 +218,10 @@ public class FeatureExtractor {
             }
         }

-        if (features.contains(HtmlFeature.JS) && adblockSimulator.hasAds(doc.clone())) {
+        if (features.contains(HtmlFeature.JS)
+                // remove while disabled to get rid of expensive clone() call:
+                // adblockSimulator.hasAds(doc.clone())
+        ) {
             features.add(HtmlFeature.ADVERTISEMENT);
         }

View File

@@ -14,6 +14,7 @@ import nu.marginalia.model.crawldata.CrawledDocument;
 import nu.marginalia.model.html.HtmlStandard;

 import javax.annotation.Nullable;
+import java.io.IOException;
 import java.net.URISyntaxException;
 import java.util.HashSet;
 import java.util.List;
@@ -25,7 +26,7 @@ public abstract class AbstractDocumentProcessorPlugin {
         this.languageFilter = languageFilter;
     }

-    public abstract DetailsWithWords createDetails(CrawledDocument crawledDocument, LinkTexts linkTexts, DocumentClass documentClass) throws DisqualifiedException, URISyntaxException;
+    public abstract DetailsWithWords createDetails(CrawledDocument crawledDocument, LinkTexts linkTexts, DocumentClass documentClass) throws DisqualifiedException, URISyntaxException, IOException;

     public abstract boolean isApplicable(CrawledDocument doc);

     protected void checkDocumentLanguage(DocumentLanguageData dld) throws DisqualifiedException {
@@ -86,6 +87,7 @@ public abstract class AbstractDocumentProcessorPlugin {
             return this;
         }

         public MetaTagsBuilder addPubDate(PubDate pubDate) {
             if (pubDate.year() > 1900) {

View File

@@ -6,6 +6,7 @@ import nu.marginalia.converting.model.DisqualifiedException;
 import nu.marginalia.converting.model.DocumentHeaders;
 import nu.marginalia.converting.model.GeneratorType;
 import nu.marginalia.converting.model.ProcessedDocumentDetails;
+import nu.marginalia.converting.processor.AcceptableAds;
 import nu.marginalia.converting.processor.DocumentClass;
 import nu.marginalia.converting.processor.MetaRobotsTag;
 import nu.marginalia.converting.processor.logic.*;
@@ -32,11 +33,11 @@ import nu.marginalia.model.crawldata.CrawledDocument;
 import nu.marginalia.model.html.HtmlStandard;
 import nu.marginalia.model.idx.DocumentFlags;
 import nu.marginalia.model.idx.DocumentMetadata;
-import org.jsoup.Jsoup;
 import org.jsoup.nodes.Document;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

+import java.io.IOException;
 import java.net.URISyntaxException;
 import java.util.EnumSet;
 import java.util.HashSet;
@@ -51,7 +52,6 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
     private final double minDocumentQuality;

     private final FeatureExtractor featureExtractor;
-    private final TitleExtractor titleExtractor;
     private final DocumentKeywordExtractor keywordExtractor;
     private final PubDateSniffer pubDateSniffer;
@@ -74,7 +74,6 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
                                        @Named("min-document-quality") Double minDocumentQuality,
                                        LanguageFilter languageFilter,
                                        FeatureExtractor featureExtractor,
-                                       TitleExtractor titleExtractor,
                                        DocumentKeywordExtractor keywordExtractor,
                                        PubDateSniffer pubDateSniffer,
                                        DocumentLengthLogic documentLengthLogic,
@@ -89,7 +88,6 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
         this.minDocumentQuality = minDocumentQuality;
         this.featureExtractor = featureExtractor;
-        this.titleExtractor = titleExtractor;
         this.keywordExtractor = keywordExtractor;
         this.pubDateSniffer = pubDateSniffer;
         this.metaRobotsTag = metaRobotsTag;
@@ -108,19 +106,17 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
     public DetailsWithWords createDetails(CrawledDocument crawledDocument,
                                           LinkTexts linkTexts,
                                           DocumentClass documentClass)
-            throws DisqualifiedException, URISyntaxException {
+            throws DisqualifiedException, URISyntaxException, IOException {

-        String documentBody = crawledDocument.documentBody;
-
-        if (languageFilter.isBlockedUnicodeRange(documentBody)) {
+        if (languageFilter.isBlockedUnicodeRange(crawledDocument.documentBody(512))) {
             throw new DisqualifiedException(DisqualificationReason.LANGUAGE);
         }

-        if (documentBody.length() > MAX_DOCUMENT_LENGTH_BYTES) { // 128kb
-            documentBody = documentBody.substring(0, MAX_DOCUMENT_LENGTH_BYTES);
-        }
-
-        Document doc = Jsoup.parse(documentBody);
+        Document doc = crawledDocument.parseBody();
+
+        if (AcceptableAds.hasAcceptableAdsTag(doc)) {
+            throw new DisqualifiedException(DisqualifiedException.DisqualificationReason.ACCEPTABLE_ADS);
+        }

         if (!metaRobotsTag.allowIndexingByMetaTag(doc)) {
             throw new DisqualifiedException(DisqualificationReason.FORBIDDEN);
@@ -138,32 +134,33 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
         }

         var prunedDoc = specialization.prune(doc);

-        DocumentLanguageData dld = sentenceExtractorProvider.get().extractSentences(prunedDoc);
-
-        checkDocumentLanguage(dld);
-
-        var ret = new ProcessedDocumentDetails();
-
         final int length = getLength(doc);
         final HtmlStandard standard = getHtmlStandard(doc);
         final double quality = documentValuator.getQuality(crawledDocument, standard, doc, length);

+        if (isDisqualified(documentClass, url, quality, doc.title())) {
+            throw new DisqualifiedException(DisqualificationReason.QUALITY);
+        }
+
+        DocumentLanguageData dld = sentenceExtractorProvider.get().extractSentences(prunedDoc);
+
+        checkDocumentLanguage(dld);
+
+        documentLengthLogic.validateLength(dld, specialization.lengthModifier() * documentClass.lengthLimitModifier());
+
+        var ret = new ProcessedDocumentDetails();
+
         ret.length = length;
         ret.standard = standard;
         ret.title = specialization.getTitle(doc, dld, crawledDocument.url);

-        documentLengthLogic.validateLength(dld, specialization.lengthModifier() * documentClass.lengthLimitModifier());
-
         final Set<HtmlFeature> features = featureExtractor.getFeatures(url, doc, documentHeaders, dld);
         ret.features = features;
         ret.quality = documentValuator.adjustQuality(quality, features);
         ret.hashCode = dld.localitySensitiveHashCode();

-        if (isDisqualified(documentClass, url, quality, ret.title)) {
-            throw new DisqualifiedException(DisqualificationReason.QUALITY);
-        }
-
         PubDate pubDate = pubDateSniffer.getPubDate(documentHeaders, url, doc, standard, true);

         EnumSet<DocumentFlags> documentFlags = documentFlags(features, generatorParts.type());

View File

@@ -71,7 +71,7 @@ public class PlainTextDocumentProcessorPlugin extends AbstractDocumentProcessorP
                                           DocumentClass documentClass)
             throws DisqualifiedException, URISyntaxException {

-        String documentBody = crawledDocument.documentBody;
+        String documentBody = crawledDocument.documentBody();

         if (languageFilter.isBlockedUnicodeRange(documentBody)) {
             throw new DisqualifiedException(DisqualifiedException.DisqualificationReason.LANGUAGE);

View File

@@ -19,6 +19,7 @@ import nu.marginalia.model.idx.DocumentMetadata;
 import nu.marginalia.model.idx.WordFlags;

 import java.net.URISyntaxException;
+import java.nio.charset.StandardCharsets;
 import java.time.LocalDateTime;
 import java.util.EnumSet;
 import java.util.List;
@@ -50,7 +51,7 @@ public class SideloaderProcessing {
                 "OK",
                 "NP",
                 "",
-                body,
+                body.getBytes(StandardCharsets.UTF_8),
                 false,
                 null,
                 null

View File

@@ -127,7 +127,7 @@ public class EncyclopediaMarginaliaNuSideloader implements SideloadSource, AutoC
         }
         fullHtml.append("</div></body></html>");

-        var doc = sideloaderProcessing
+        return sideloaderProcessing
                 .processDocument(fullUrl,
                         fullHtml.toString(),
                         List.of("encyclopedia", "wiki"),
@@ -137,8 +137,6 @@ public class EncyclopediaMarginaliaNuSideloader implements SideloadSource, AutoC
                         anchorTextKeywords.getAnchorTextKeywords(domainLinks, new EdgeUrl(fullUrl)),
                         LocalDate.now().getYear(),
                         10_000_000);
-
-        return doc;
     }

     private String normalizeUtf8(String url) {

View File

@@ -106,11 +106,7 @@ public class WarcSideloader implements SideloadSource, AutoCloseable {
                 return false;

             var url = new EdgeUrl(warcResponse.target());
-            if (!Objects.equals(url.getDomain(), domain)) {
-                return false;
-            }
-
-            return true;
+            return Objects.equals(url.getDomain(), domain);
         } catch (Exception e) {
             logger.warn("Failed to process response", e);
         }

View File

@@ -39,6 +39,9 @@ public class ConverterWriter implements AutoCloseable {
         workerThread.start();
     }

+    /** Queue and eventually write the domain into the converter journal
+     *  The domain object will be closed after it's processed.
+     * */
     public void accept(@Nullable ConverterBatchWritableIf domain) {
         if (null == domain)
             return;
@@ -72,15 +75,15 @@ public class ConverterWriter implements AutoCloseable {
                 if (workLog.isItemCommitted(id) || workLog.isItemInCurrentBatch(id)) {
                     logger.warn("Skipping already logged item {}", id);
+                }
+                else {
+                    currentWriter.write(data);
+                    workLog.logItem(id);
                     data.close();
-                    continue;
                 }

-                currentWriter.write(data);
-                workLog.logItem(id);
-
                 switcher.tick();
-                data.close();
             }
         }
         catch (Exception ex) {

View File

@@ -11,7 +11,6 @@ import nu.marginalia.slop.column.primitive.IntColumn;
 import nu.marginalia.slop.column.primitive.LongColumn;
 import nu.marginalia.slop.column.string.EnumColumn;
 import nu.marginalia.slop.column.string.StringColumn;
-import nu.marginalia.slop.column.string.TxtStringColumn;
 import nu.marginalia.slop.desc.StorageType;
 import org.jetbrains.annotations.Nullable;
@@ -182,8 +181,8 @@ public record SlopDocumentRecord(
     }

     // Basic information
-    private static final TxtStringColumn domainsColumn = new TxtStringColumn("domain", StandardCharsets.UTF_8, StorageType.GZIP);
-    private static final TxtStringColumn urlsColumn = new TxtStringColumn("url", StandardCharsets.UTF_8, StorageType.GZIP);
+    private static final StringColumn domainsColumn = new StringColumn("domain", StandardCharsets.UTF_8, StorageType.GZIP);
+    private static final StringColumn urlsColumn = new StringColumn("url", StandardCharsets.UTF_8, StorageType.GZIP);
     private static final VarintColumn ordinalsColumn = new VarintColumn("ordinal", StorageType.PLAIN);
     private static final EnumColumn statesColumn = new EnumColumn("state", StandardCharsets.US_ASCII, StorageType.PLAIN);
     private static final StringColumn stateReasonsColumn = new StringColumn("stateReason", StandardCharsets.US_ASCII, StorageType.GZIP);
@@ -211,7 +210,7 @@ public record SlopDocumentRecord(
     private static final VarintCodedSequenceArrayColumn spansColumn = new VarintCodedSequenceArrayColumn("spans", StorageType.ZSTD);

     public static class KeywordsProjectionReader extends SlopTable {
-        private final TxtStringColumn.Reader domainsReader;
+        private final StringColumn.Reader domainsReader;
         private final VarintColumn.Reader ordinalsReader;
         private final IntColumn.Reader htmlFeaturesReader;
         private final LongColumn.Reader domainMetadataReader;
@@ -275,8 +274,8 @@ public record SlopDocumentRecord(
     }

     public static class MetadataReader extends SlopTable {
-        private final TxtStringColumn.Reader domainsReader;
-        private final TxtStringColumn.Reader urlsReader;
+        private final StringColumn.Reader domainsReader;
+        private final StringColumn.Reader urlsReader;
         private final VarintColumn.Reader ordinalsReader;
         private final StringColumn.Reader titlesReader;
         private final StringColumn.Reader descriptionsReader;
@@ -332,8 +331,8 @@ public record SlopDocumentRecord(
     }

     public static class Writer extends SlopTable {
-        private final TxtStringColumn.Writer domainsWriter;
-        private final TxtStringColumn.Writer urlsWriter;
+        private final StringColumn.Writer domainsWriter;
+        private final StringColumn.Writer urlsWriter;
         private final VarintColumn.Writer ordinalsWriter;
         private final EnumColumn.Writer statesWriter;
         private final StringColumn.Writer stateReasonsWriter;

View File

@@ -98,7 +98,7 @@ public class ConvertingIntegrationTest {

     @Test
     public void testMemexMarginaliaNuSideloadProcessing() throws IOException {
-        var ret = domainProcessor.sideloadProcessing(asSerializableCrawlData(readMarginaliaWorkingSet()), 100);
+        var ret = domainProcessor.simpleProcessing(asSerializableCrawlData(readMarginaliaWorkingSet()), 100);
         assertNotNull(ret);

         assertEquals("memex.marginalia.nu", ret.id());
@@ -146,7 +146,7 @@ public class ConvertingIntegrationTest {
                 "OK",
                 "",
                 "",
-                readClassPathFile(p.toString()),
+                readClassPathFile(p.toString()).getBytes(),
                 false,
                 null,
                 null

View File

@@ -20,6 +20,7 @@ import nu.marginalia.model.crawldata.CrawledDocument;
 import nu.marginalia.model.crawldata.CrawledDomain;
 import nu.marginalia.model.crawldata.SerializableCrawlData;
 import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileWriter;
+import org.apache.hc.client5.http.cookie.BasicCookieStore;
 import org.junit.jupiter.api.*;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -200,23 +201,23 @@ public class CrawlingThenConvertingIntegrationTest {

     @Test
     public void crawlRobotsTxt() throws Exception {
-        var specs = new CrawlerMain.CrawlSpecRecord("search.marginalia.nu", 5,
-                List.of("https://search.marginalia.nu/search?q=hello+world")
+        var specs = new CrawlerMain.CrawlSpecRecord("marginalia-search.com", 5,
+                List.of("https://marginalia-search.com/search?q=hello+world")
         );

         CrawledDomain domain = crawl(specs);
         assertFalse(domain.doc.isEmpty());
         assertEquals("OK", domain.crawlerStatus);
-        assertEquals("search.marginalia.nu", domain.domain);
+        assertEquals("marginalia-search.com", domain.domain);

         Set<String> allUrls = domain.doc.stream().map(doc -> doc.url).collect(Collectors.toSet());
-        assertTrue(allUrls.contains("https://search.marginalia.nu/search"), "We expect a record for entities that are forbidden");
+        assertTrue(allUrls.contains("https://marginalia-search.com/search"), "We expect a record for entities that are forbidden");

         var output = process();

         assertNotNull(output);
         assertFalse(output.documents.isEmpty());
-        assertEquals(new EdgeDomain("search.marginalia.nu"), output.domain);
+        assertEquals(new EdgeDomain("marginalia-search.com"), output.domain);
         assertEquals(DomainIndexingState.ACTIVE, output.state);

         for (var doc : output.documents) {
@@ -246,7 +247,7 @@ public class CrawlingThenConvertingIntegrationTest {
     private CrawledDomain crawl(CrawlerMain.CrawlSpecRecord specs, Predicate<EdgeDomain> domainBlacklist) throws Exception {
         List<SerializableCrawlData> data = new ArrayList<>();

-        try (var recorder = new WarcRecorder(fileName);
+        try (var recorder = new WarcRecorder(fileName, new BasicCookieStore());
             var db = new DomainStateDb(dbTempFile))
         {
             new CrawlerRetreiver(httpFetcher, new DomainProber(domainBlacklist), specs, db, recorder).crawlDomain();

View File

@@ -55,16 +55,19 @@ dependencies {
     implementation libs.zstd
     implementation libs.jwarc
     implementation libs.crawlercommons
-    implementation libs.okhttp3
     implementation libs.jsoup
     implementation libs.opencsv
     implementation libs.fastutil
     implementation libs.bundles.mariadb
+    implementation libs.bundles.httpcomponents

     testImplementation libs.bundles.slf4j.test
     testImplementation libs.bundles.junit
     testImplementation libs.mockito
+
+    testImplementation libs.wiremock

     testImplementation project(':code:processes:test-data')
 }

View File

@@ -2,11 +2,16 @@ package nu.marginalia.contenttype;
import org.apache.commons.lang3.StringUtils; import org.apache.commons.lang3.StringUtils;
import java.nio.charset.Charset;
import java.nio.charset.IllegalCharsetNameException;
import java.nio.charset.StandardCharsets;
/** Content type and charset of a document /** Content type and charset of a document
* @param contentType The content type, e.g. "text/html" * @param contentType The content type, e.g. "text/html"
* @param charset The charset, e.g. "UTF-8" * @param charset The charset, e.g. "UTF-8"
*/ */
public record ContentType(String contentType, String charset) { public record ContentType(String contentType, String charset) {
public static ContentType parse(String contentTypeHeader) { public static ContentType parse(String contentTypeHeader) {
if (contentTypeHeader == null || contentTypeHeader.isBlank()) if (contentTypeHeader == null || contentTypeHeader.isBlank())
return new ContentType(null, null); return new ContentType(null, null);
@@ -15,9 +20,31 @@ public record ContentType(String contentType, String charset) {
String contentType = parts[0].trim(); String contentType = parts[0].trim();
String charset = parts.length > 1 ? parts[1].trim() : "UTF-8"; String charset = parts.length > 1 ? parts[1].trim() : "UTF-8";
if (charset.toLowerCase().startsWith("charset=")) {
charset = charset.substring("charset=".length());
}
return new ContentType(contentType, charset); return new ContentType(contentType, charset);
} }
/** Best effort method for turning the provided charset string into a Java charset method,
* with some guesswork-heuristics for when it doesn't work
*/
public Charset asCharset() {
try {
if (Charset.isSupported(charset)) {
return Charset.forName(charset);
} else if (charset.equalsIgnoreCase("macintosh-latin")) {
return StandardCharsets.ISO_8859_1;
} else {
return StandardCharsets.UTF_8;
}
}
catch (IllegalCharsetNameException ex) { // thrown by Charset.isSupported()
return StandardCharsets.UTF_8;
}
}
public boolean is(String contentType) { public boolean is(String contentType) {
return this.contentType.equalsIgnoreCase(contentType); return this.contentType.equalsIgnoreCase(contentType);
} }
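
A minimal usage sketch of the parsing and charset fallback introduced above; the header value and rawBytes are placeholders for illustration:

// Hedged example: parse a Content-Type header and decode a document body with the fallback heuristics above
ContentType ct = ContentType.parse("text/html; charset=macintosh-latin");
Charset charset = ct.asCharset();               // "macintosh-latin" maps to ISO-8859-1 via the heuristic
String body = new String(rawBytes, charset);    // rawBytes: the fetched document bytes (assumed in scope)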

View File

@@ -1,9 +1,12 @@
package nu.marginalia.contenttype; package nu.marginalia.contenttype;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.Charset; import java.nio.charset.Charset;
import java.nio.charset.IllegalCharsetNameException;
import java.nio.charset.StandardCharsets; import java.nio.charset.StandardCharsets;
import java.nio.charset.UnsupportedCharsetException;
import java.util.Map; import java.util.Map;
import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentHashMap;
@@ -23,24 +26,25 @@ public class DocumentBodyToString {
return new String(data, charset); return new String(data, charset);
} }
public static Document getParsedData(ContentType type, byte[] data, int maxLength, String url) throws IOException {
final Charset charset;
if (type.charset() == null || type.charset().isBlank()) {
charset = StandardCharsets.UTF_8;
} else {
charset = charsetMap.computeIfAbsent(type, DocumentBodyToString::computeCharset);
}
ByteArrayInputStream bais = new ByteArrayInputStream(data, 0, Math.min(data.length, maxLength));
return Jsoup.parse(bais, charset.name(), url);
}
private static Charset computeCharset(ContentType type) { private static Charset computeCharset(ContentType type) {
try { if (type.charset() == null || type.charset().isBlank())
if (type.charset() == null || type.charset().isBlank())
return StandardCharsets.UTF_8;
else {
return Charset.forName(type.charset());
}
}
catch (IllegalCharsetNameException ex) {
// Fall back to UTF-8 if we don't understand what this is. It's *probably* fine? Maybe?
return StandardCharsets.UTF_8; return StandardCharsets.UTF_8;
} else {
catch (UnsupportedCharsetException ex) { return type.asCharset();
// This is usually like Macintosh Latin
// (https://en.wikipedia.org/wiki/Macintosh_Latin_encoding)
//
// It's close enough to 8859-1 to serve
return StandardCharsets.ISO_8859_1;
} }
} }
} }
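
A short sketch of the new getParsedData entry point, which streams at most maxLength bytes into Jsoup with the resolved charset; the byte array and URL are placeholders:

// Hedged example of parsing fetched bytes directly into a Jsoup Document
ContentType ct = ContentType.parse("text/html; charset=utf-8");
Document doc = DocumentBodyToString.getParsedData(ct, rawBytes, 1_000_000, "https://www.example.com/");
String title = doc.title();                     // rawBytes is assumed to hold the fetched document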

View File

@@ -19,22 +19,19 @@ import nu.marginalia.crawl.retreival.DomainProber;
import nu.marginalia.crawl.warc.WarcArchiverFactory; import nu.marginalia.crawl.warc.WarcArchiverFactory;
import nu.marginalia.crawl.warc.WarcArchiverIf; import nu.marginalia.crawl.warc.WarcArchiverIf;
import nu.marginalia.db.DomainBlacklist; import nu.marginalia.db.DomainBlacklist;
import nu.marginalia.io.CrawledDomainReader;
import nu.marginalia.io.CrawlerOutputFile; import nu.marginalia.io.CrawlerOutputFile;
import nu.marginalia.model.EdgeDomain; import nu.marginalia.model.EdgeDomain;
import nu.marginalia.mq.MessageQueueFactory; import nu.marginalia.mq.MessageQueueFactory;
import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileWriter;
import nu.marginalia.process.ProcessConfiguration; import nu.marginalia.process.ProcessConfiguration;
import nu.marginalia.process.ProcessConfigurationModule; import nu.marginalia.process.ProcessConfigurationModule;
import nu.marginalia.process.ProcessMainClass; import nu.marginalia.process.ProcessMainClass;
import nu.marginalia.process.control.ProcessHeartbeatImpl; import nu.marginalia.process.control.ProcessHeartbeatImpl;
import nu.marginalia.process.log.WorkLog; import nu.marginalia.process.log.WorkLog;
import nu.marginalia.service.module.DatabaseModule; import nu.marginalia.service.module.DatabaseModule;
import nu.marginalia.slop.SlopCrawlDataRecord;
import nu.marginalia.storage.FileStorageService; import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorageId; import nu.marginalia.storage.model.FileStorageId;
import nu.marginalia.util.SimpleBlockingThreadPool; import nu.marginalia.util.SimpleBlockingThreadPool;
import okhttp3.ConnectionPool;
import okhttp3.Dispatcher;
import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.NotNull;
import org.slf4j.Logger; import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
@@ -44,10 +41,7 @@ import java.nio.file.Files;
import java.nio.file.Path; import java.nio.file.Path;
import java.nio.file.StandardCopyOption; import java.nio.file.StandardCopyOption;
import java.security.Security; import java.security.Security;
import java.util.ArrayList; import java.util.*;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicInteger;
@@ -85,6 +79,7 @@ public class CrawlerMain extends ProcessMainClass {
@Inject @Inject
public CrawlerMain(UserAgent userAgent, public CrawlerMain(UserAgent userAgent,
HttpFetcherImpl httpFetcher,
ProcessHeartbeatImpl heartbeat, ProcessHeartbeatImpl heartbeat,
MessageQueueFactory messageQueueFactory, DomainProber domainProber, MessageQueueFactory messageQueueFactory, DomainProber domainProber,
FileStorageService fileStorageService, FileStorageService fileStorageService,
@@ -98,6 +93,7 @@ public class CrawlerMain extends ProcessMainClass {
super(messageQueueFactory, processConfiguration, gson, CRAWLER_INBOX); super(messageQueueFactory, processConfiguration, gson, CRAWLER_INBOX);
this.userAgent = userAgent; this.userAgent = userAgent;
this.fetcher = httpFetcher;
this.heartbeat = heartbeat; this.heartbeat = heartbeat;
this.domainProber = domainProber; this.domainProber = domainProber;
this.fileStorageService = fileStorageService; this.fileStorageService = fileStorageService;
@@ -107,14 +103,19 @@ public class CrawlerMain extends ProcessMainClass {
this.blacklist = blacklist; this.blacklist = blacklist;
this.node = processConfiguration.node(); this.node = processConfiguration.node();
SimpleBlockingThreadPool.ThreadType threadType;
if (Boolean.getBoolean("crawler.useVirtualThreads")) {
threadType = SimpleBlockingThreadPool.ThreadType.VIRTUAL;
}
else {
threadType = SimpleBlockingThreadPool.ThreadType.PLATFORM;
}
pool = new SimpleBlockingThreadPool("CrawlerPool", pool = new SimpleBlockingThreadPool("CrawlerPool",
Integer.getInteger("crawler.poolSize", 256), Integer.getInteger("crawler.poolSize", 256),
1); 1,
threadType);
fetcher = new HttpFetcherImpl(userAgent,
new Dispatcher(),
new ConnectionPool(5, 10, TimeUnit.SECONDS)
);
// Wait for the blacklist to be loaded before starting the crawl // Wait for the blacklist to be loaded before starting the crawl
blacklist.waitUntilLoaded(); blacklist.waitUntilLoaded();
@@ -132,6 +133,10 @@ public class CrawlerMain extends ProcessMainClass {
System.setProperty("sun.net.client.defaultConnectTimeout", "30000"); System.setProperty("sun.net.client.defaultConnectTimeout", "30000");
System.setProperty("sun.net.client.defaultReadTimeout", "30000"); System.setProperty("sun.net.client.defaultReadTimeout", "30000");
// Set the maximum number of connections to keep alive in the connection pool
System.setProperty("jdk.httpclient.idleTimeout", "15"); // 15 seconds
System.setProperty("jdk.httpclient.connectionPoolSize", "256");
// We don't want to use too much memory caching sessions for https // We don't want to use too much memory caching sessions for https
System.setProperty("javax.net.ssl.sessionCacheSize", "2048"); System.setProperty("javax.net.ssl.sessionCacheSize", "2048");
@@ -225,10 +230,7 @@ public class CrawlerMain extends ProcessMainClass {
logger.info("Loaded {} domains", crawlSpecRecords.size()); logger.info("Loaded {} domains", crawlSpecRecords.size());
// Shuffle the domains to ensure we get a good mix of domains in each crawl, crawlSpecRecords.sort(crawlSpecArrangement(crawlSpecRecords));
// so that e.g. the big domains don't get all crawled at once, or we end up
// crawling the same server in parallel from different subdomains...
Collections.shuffle(crawlSpecRecords);
// First a validation run to ensure the file is all good to parse // First a validation run to ensure the file is all good to parse
if (crawlSpecRecords.isEmpty()) { if (crawlSpecRecords.isEmpty()) {
@@ -249,9 +251,14 @@ public class CrawlerMain extends ProcessMainClass {
// (this happens when the process is restarted after a crash or a shutdown) // (this happens when the process is restarted after a crash or a shutdown)
tasksDone.set(workLog.countFinishedJobs()); tasksDone.set(workLog.countFinishedJobs());
// Create crawl tasks and submit them to the pool for execution // List of deferred tasks used to ensure beneficial scheduling of domains with regard to DomainLocks,
// merely shuffling the domains tends to lead to a lot of threads being blocked waiting for a semaphore,
// this will more aggressively attempt to schedule the jobs to avoid blocking
List<CrawlTask> taskList = new ArrayList<>();
// Create crawl tasks
for (CrawlSpecRecord crawlSpec : crawlSpecRecords) { for (CrawlSpecRecord crawlSpec : crawlSpecRecords) {
if (workLog.isJobFinished(crawlSpec.domain())) if (workLog.isJobFinished(crawlSpec.domain))
continue; continue;
var task = new CrawlTask( var task = new CrawlTask(
@@ -262,11 +269,22 @@ public class CrawlerMain extends ProcessMainClass {
domainStateDb, domainStateDb,
workLog); workLog);
if (pendingCrawlTasks.putIfAbsent(crawlSpec.domain(), task) == null) { // Try to run immediately, to avoid unnecessarily keeping the entire work set in RAM
pool.submitQuietly(task); if (!trySubmitDeferredTask(task)) {
// Otherwise add to the taskList for deferred execution
taskList.add(task);
} }
} }
// Schedule viable tasks for execution until list is empty
while (!taskList.isEmpty()) {
taskList.removeIf(this::trySubmitDeferredTask);
// Add a small pause here to avoid busy looping toward the end of the execution cycle when
// we might have no new viable tasks to run for hours on end
TimeUnit.MILLISECONDS.sleep(50);
}
logger.info("Shutting down the pool, waiting for tasks to complete..."); logger.info("Shutting down the pool, waiting for tasks to complete...");
pool.shutDown(); pool.shutDown();
@@ -291,6 +309,51 @@ public class CrawlerMain extends ProcessMainClass {
} }
} }
/** Create a comparator that sorts the crawl specs in a way that is beneficial for the crawl,
* we want to enqueue domains that have common top domains first, but otherwise have a random
* order.
* <p></p>
* Note, we can't use hash codes for randomization as it is not desirable to have the same order
* every time the process is restarted (and CrawlSpecRecord is a record, which defines equals and
* hashcode based on the fields).
* */
private Comparator<CrawlSpecRecord> crawlSpecArrangement(List<CrawlSpecRecord> records) {
Random r = new Random();
Map<String, Integer> topDomainCounts = new HashMap<>(4 + (int) Math.sqrt(records.size()));
Map<String, Integer> randomOrder = new HashMap<>(records.size());
for (var spec : records) {
topDomainCounts.merge(EdgeDomain.getTopDomain(spec.domain), 1, Integer::sum);
randomOrder.put(spec.domain, r.nextInt());
}
return Comparator.comparing((CrawlSpecRecord spec) -> topDomainCounts.getOrDefault(EdgeDomain.getTopDomain(spec.domain), 0) >= 8)
.reversed()
.thenComparing(spec -> randomOrder.get(spec.domain))
.thenComparing(Record::hashCode); // non-deterministic tie-breaker
}
/** Submit a task for execution if it can be run, returns true if it was submitted
* or if it can be discarded */
private boolean trySubmitDeferredTask(CrawlTask task) {
if (!task.canRun()) {
return false;
}
if (pendingCrawlTasks.putIfAbsent(task.domain, task) != null) {
return true; // task has already run, duplicate in crawl specs
}
try {
// This blocks the caller when the pool is full
pool.submitQuietly(task);
return true;
}
catch (RuntimeException ex) {
logger.error("Failed to submit task " + task.domain, ex);
return false;
}
}
public void runForSingleDomain(String targetDomainName, FileStorageId fileStorageId) throws Exception { public void runForSingleDomain(String targetDomainName, FileStorageId fileStorageId) throws Exception {
runForSingleDomain(targetDomainName, fileStorageService.getStorage(fileStorageId).asPath()); runForSingleDomain(targetDomainName, fileStorageService.getStorage(fileStorageId).asPath());
@@ -348,12 +411,23 @@ public class CrawlerMain extends ProcessMainClass {
this.id = Integer.toHexString(domain.hashCode()); this.id = Integer.toHexString(domain.hashCode());
} }
/** Best effort indicator whether we could start this now without getting stuck in
* DomainLocks purgatory */
public boolean canRun() {
return domainLocks.canLock(new EdgeDomain(domain));
}
@Override @Override
public void run() throws Exception { public void run() throws Exception {
if (workLog.isJobFinished(domain)) { // No-Op
logger.info("Omitting task {}, as it is already run", domain);
return;
}
Path newWarcFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.LIVE); Path newWarcFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.LIVE);
Path tempFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.TEMP); Path tempFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.TEMP);
Path parquetFile = CrawlerOutputFile.createParquetPath(outputDir, id, domain); Path slopFile = CrawlerOutputFile.createSlopPath(outputDir, id, domain);
// Move the WARC file to a temp file if it exists, so we can resume the crawl using the old data // Move the WARC file to a temp file if it exists, so we can resume the crawl using the old data
// while writing to the same file name as before // while writing to the same file name as before
@@ -364,10 +438,10 @@ public class CrawlerMain extends ProcessMainClass {
Files.deleteIfExists(tempFile); Files.deleteIfExists(tempFile);
} }
try (var warcRecorder = new WarcRecorder(newWarcFile); // write to a temp file for now try (var warcRecorder = new WarcRecorder(newWarcFile, fetcher); // write to a temp file for now
var retriever = new CrawlerRetreiver(fetcher, domainProber, specification, domainStateDb, warcRecorder); var retriever = new CrawlerRetreiver(fetcher, domainProber, specification, domainStateDb, warcRecorder);
CrawlDataReference reference = getReference(); CrawlDataReference reference = getReference()
) )
{ {
// Resume the crawl if it was aborted // Resume the crawl if it was aborted
if (Files.exists(tempFile)) { if (Files.exists(tempFile)) {
@@ -387,15 +461,15 @@ public class CrawlerMain extends ProcessMainClass {
reference.delete(); reference.delete();
// Convert the WARC file to Parquet // Convert the WARC file to Parquet
CrawledDocumentParquetRecordFileWriter SlopCrawlDataRecord
.convertWarc(domain, userAgent, newWarcFile, parquetFile); .convertWarc(domain, userAgent, newWarcFile, slopFile);
// Optionally archive the WARC file if full retention is enabled, // Optionally archive the WARC file if full retention is enabled,
// otherwise delete it: // otherwise delete it:
warcArchiver.consumeWarc(newWarcFile, domain); warcArchiver.consumeWarc(newWarcFile, domain);
// Mark the domain as finished in the work log // Mark the domain as finished in the work log
workLog.setJobToFinished(domain, parquetFile.toString(), size); workLog.setJobToFinished(domain, slopFile.toString(), size);
// Update the progress bar // Update the progress bar
heartbeat.setProgress(tasksDone.incrementAndGet() / (double) totalTasks); heartbeat.setProgress(tasksDone.incrementAndGet() / (double) totalTasks);
@@ -405,7 +479,7 @@ public class CrawlerMain extends ProcessMainClass {
logger.error("Error fetching domain " + domain, e); logger.error("Error fetching domain " + domain, e);
} }
finally { finally {
// We don't need to double-count these; it's also kept int he workLog // We don't need to double-count these; it's also kept in the workLog
pendingCrawlTasks.remove(domain); pendingCrawlTasks.remove(domain);
Thread.currentThread().setName("[idle]"); Thread.currentThread().setName("[idle]");
@@ -416,11 +490,22 @@ public class CrawlerMain extends ProcessMainClass {
private CrawlDataReference getReference() { private CrawlDataReference getReference() {
try { try {
return new CrawlDataReference(CrawledDomainReader.createDataStream(outputDir, domain, id)); Path slopPath = CrawlerOutputFile.getSlopPath(outputDir, id, domain);
} catch (IOException e) { if (Files.exists(slopPath)) {
return new CrawlDataReference(slopPath);
}
Path parquetPath = CrawlerOutputFile.getParquetPath(outputDir, id, domain);
if (Files.exists(parquetPath)) {
slopPath = migrateParquetData(parquetPath, domain, outputDir);
return new CrawlDataReference(slopPath);
}
} catch (Exception e) {
logger.debug("Failed to read previous crawl data for {}", specification.domain()); logger.debug("Failed to read previous crawl data for {}", specification.domain());
return new CrawlDataReference();
} }
return new CrawlDataReference();
} }
} }
@@ -480,4 +565,20 @@ public class CrawlerMain extends ProcessMainClass {
} }
} }
} }
// Migrate from parquet to slop if necessary
//
// This must be synchronized as chewing through parquet files in parallel leads to enormous memory overhead
private synchronized Path migrateParquetData(Path inputPath, String domain, Path crawlDataRoot) throws IOException {
if (!inputPath.toString().endsWith(".parquet")) {
return inputPath;
}
Path outputFile = CrawlerOutputFile.createSlopPath(crawlDataRoot, Integer.toHexString(domain.hashCode()), domain);
SlopCrawlDataRecord.convertFromParquet(inputPath, outputFile);
return outputFile;
}
} }
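
For reference, the crawl scheduling above is tuned through JVM system properties; a hedged summary sketch follows (the values shown are just the defaults from the code, normally supplied as -D flags at startup):

// Illustrative only: the knobs read by CrawlerMain and by HttpFetcherImpl's SendLock (see further down)
boolean useVirtualThreads = Boolean.getBoolean("crawler.useVirtualThreads");          // platform threads if unset
int poolSize = Integer.getInteger("crawler.poolSize", 256);                           // concurrent crawl tasks
int maxConcurrentRequests = Integer.getInteger("crawler.maxConcurrentRequests", 512); // outgoing request semaphore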

View File

@@ -1,5 +1,8 @@
package nu.marginalia.crawl; package nu.marginalia.crawl;
import com.google.inject.Inject;
import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorageType;
import org.slf4j.Logger; import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
@@ -8,7 +11,9 @@ import java.nio.file.Path;
import java.sql.Connection; import java.sql.Connection;
import java.sql.DriverManager; import java.sql.DriverManager;
import java.sql.SQLException; import java.sql.SQLException;
import java.time.Duration;
import java.time.Instant; import java.time.Instant;
import java.util.Objects;
import java.util.Optional; import java.util.Optional;
/** Supplemental sqlite database for storing the summary of a crawl. /** Supplemental sqlite database for storing the summary of a crawl.
@@ -20,6 +25,17 @@ public class DomainStateDb implements AutoCloseable {
private final Connection connection; private final Connection connection;
public record CrawlMeta(
String domainName,
Instant lastFullCrawl,
Duration recrawlTime,
Duration crawlTime,
int recrawlErrors,
int crawlChanges,
int totalCrawlSize
) {}
public record SummaryRecord( public record SummaryRecord(
String domainName, String domainName,
Instant lastUpdated, Instant lastUpdated,
@@ -60,7 +76,31 @@ public class DomainStateDb implements AutoCloseable {
} }
public DomainStateDb(Path filename) throws SQLException { public record FaviconRecord(String contentType, byte[] imageData) {}
@Inject
public DomainStateDb(FileStorageService fileStorageService) throws SQLException {
this(findFilename(fileStorageService));
}
private static Path findFilename(FileStorageService fileStorageService) throws SQLException {
var fsId = fileStorageService.getOnlyActiveFileStorage(FileStorageType.CRAWL_DATA);
if (fsId.isPresent()) {
var fs = fileStorageService.getStorage(fsId.get());
return fs.asPath().resolve("domainstate.db");
}
else {
return null;
}
}
public DomainStateDb(@Nullable Path filename) throws SQLException {
if (null == filename) {
connection = null;
return;
}
String sqliteDbString = "jdbc:sqlite:" + filename.toString(); String sqliteDbString = "jdbc:sqlite:" + filename.toString();
connection = DriverManager.getConnection(sqliteDbString); connection = DriverManager.getConnection(sqliteDbString);
@@ -74,18 +114,102 @@ public class DomainStateDb implements AutoCloseable {
feedUrl TEXT feedUrl TEXT
) )
"""); """);
stmt.executeUpdate("""
CREATE TABLE IF NOT EXISTS crawl_meta (
domain TEXT PRIMARY KEY,
lastFullCrawlEpochMs LONG NOT NULL,
recrawlTimeMs LONG NOT NULL,
recrawlErrors INTEGER NOT NULL,
crawlTimeMs LONG NOT NULL,
crawlChanges INTEGER NOT NULL,
totalCrawlSize INTEGER NOT NULL
)
""");
stmt.executeUpdate("""
CREATE TABLE IF NOT EXISTS favicon (
domain TEXT PRIMARY KEY,
contentType TEXT NOT NULL,
icon BLOB NOT NULL
)
""");
stmt.execute("PRAGMA journal_mode=WAL"); stmt.execute("PRAGMA journal_mode=WAL");
} }
} }
@Override @Override
public void close() throws SQLException { public void close() throws SQLException {
connection.close(); if (connection != null) {
connection.close();
}
} }
public boolean isAvailable() {
return connection != null;
}
public void saveIcon(String domain, FaviconRecord faviconRecord) {
if (connection == null) throw new IllegalStateException("No connection to domainstate db");
try (var stmt = connection.prepareStatement("""
INSERT OR REPLACE INTO favicon (domain, contentType, icon)
VALUES(?, ?, ?)
""")) {
stmt.setString(1, domain);
stmt.setString(2, Objects.requireNonNullElse(faviconRecord.contentType, "application/octet-stream"));
stmt.setBytes(3, faviconRecord.imageData);
stmt.executeUpdate();
}
catch (SQLException ex) {
logger.error("Failed to insert favicon", ex);
}
}
public Optional<FaviconRecord> getIcon(String domain) {
if (connection == null)
return Optional.empty();
try (var stmt = connection.prepareStatement("SELECT contentType, icon FROM favicon WHERE DOMAIN = ?")) {
stmt.setString(1, domain);
var rs = stmt.executeQuery();
if (rs.next()) {
return Optional.of(
new FaviconRecord(
rs.getString("contentType"),
rs.getBytes("icon")
)
);
}
} catch (SQLException e) {
logger.error("Failed to retrieve favicon", e);
}
return Optional.empty();
}
public void save(CrawlMeta crawlMeta) {
if (connection == null) throw new IllegalStateException("No connection to domainstate db");
try (var stmt = connection.prepareStatement("""
INSERT OR REPLACE INTO crawl_meta (domain, lastFullCrawlEpochMs, recrawlTimeMs, recrawlErrors, crawlTimeMs, crawlChanges, totalCrawlSize)
VALUES (?, ?, ?, ?, ?, ?, ?)
""")) {
stmt.setString(1, crawlMeta.domainName());
stmt.setLong(2, crawlMeta.lastFullCrawl.toEpochMilli());
stmt.setLong(3, crawlMeta.recrawlTime.toMillis());
stmt.setInt(4, crawlMeta.recrawlErrors);
stmt.setLong(5, crawlMeta.crawlTime.toMillis());
stmt.setInt(6, crawlMeta.crawlChanges);
stmt.setInt(7, crawlMeta.totalCrawlSize);
stmt.executeUpdate();
} catch (SQLException e) {
logger.error("Failed to insert crawl meta record", e);
}
}
public void save(SummaryRecord record) { public void save(SummaryRecord record) {
if (connection == null) throw new IllegalStateException("No connection to domainstate db");
try (var stmt = connection.prepareStatement(""" try (var stmt = connection.prepareStatement("""
INSERT OR REPLACE INTO summary (domain, lastUpdatedEpochMs, state, stateDesc, feedUrl) INSERT OR REPLACE INTO summary (domain, lastUpdatedEpochMs, state, stateDesc, feedUrl)
VALUES (?, ?, ?, ?, ?) VALUES (?, ?, ?, ?, ?)
@@ -101,7 +225,38 @@ public class DomainStateDb implements AutoCloseable {
} }
} }
public Optional<SummaryRecord> get(String domainName) { public Optional<CrawlMeta> getMeta(String domainName) {
if (connection == null)
return Optional.empty();
try (var stmt = connection.prepareStatement("""
SELECT domain, lastFullCrawlEpochMs, recrawlTimeMs, recrawlErrors, crawlTimeMs, crawlChanges, totalCrawlSize
FROM crawl_meta
WHERE domain = ?
""")) {
stmt.setString(1, domainName);
var rs = stmt.executeQuery();
if (rs.next()) {
return Optional.of(new CrawlMeta(
rs.getString("domain"),
Instant.ofEpochMilli(rs.getLong("lastFullCrawlEpochMs")),
Duration.ofMillis(rs.getLong("recrawlTimeMs")),
Duration.ofMillis(rs.getLong("crawlTimeMs")),
rs.getInt("recrawlErrors"),
rs.getInt("crawlChanges"),
rs.getInt("totalCrawlSize")
));
}
} catch (SQLException ex) {
logger.error("Failed to get crawl meta record", ex);
}
return Optional.empty();
}
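
A hedged usage sketch of the new crawl_meta table via save(CrawlMeta) and getMeta(); the path and values are made up, and checked SQLException handling is omitted (both the constructor and close() throw it):

// Record a completed crawl and read it back (illustrative values only)
try (var db = new DomainStateDb(Path.of("/tmp/domainstate.db"))) {
    db.save(new DomainStateDb.CrawlMeta(
            "www.example.com", Instant.now(),
            Duration.ofMinutes(5),      // recrawlTime
            Duration.ofMinutes(30),     // crawlTime
            0, 42, 1337));              // recrawlErrors, crawlChanges, totalCrawlSize
    Optional<DomainStateDb.CrawlMeta> meta = db.getMeta("www.example.com");
}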
public Optional<SummaryRecord> getSummary(String domainName) {
if (connection == null)
return Optional.empty();
try (var stmt = connection.prepareStatement(""" try (var stmt = connection.prepareStatement("""
SELECT domain, lastUpdatedEpochMs, state, stateDesc, feedUrl SELECT domain, lastUpdatedEpochMs, state, stateDesc, feedUrl
FROM summary FROM summary

View File

@@ -1,6 +1,6 @@
package nu.marginalia.crawl.fetcher; package nu.marginalia.crawl.fetcher;
import okhttp3.Request; import org.apache.hc.core5.http.io.support.ClassicRequestBuilder;
/** Encapsulates request modifiers; the ETag and Last-Modified tags for a resource */ /** Encapsulates request modifiers; the ETag and Last-Modified tags for a resource */
public record ContentTags(String etag, String lastMod) { public record ContentTags(String etag, String lastMod) {
@@ -17,7 +17,7 @@ public record ContentTags(String etag, String lastMod) {
} }
/** Paints the tags onto the request builder. */ /** Paints the tags onto the request builder. */
public void paint(Request.Builder getBuilder) { public void paint(ClassicRequestBuilder getBuilder) {
if (etag != null) { if (etag != null) {
getBuilder.addHeader("If-None-Match", etag); getBuilder.addHeader("If-None-Match", etag);

View File

@@ -1,33 +1,14 @@
package nu.marginalia.crawl.fetcher; package nu.marginalia.crawl.fetcher;
import okhttp3.Cookie; import java.io.IOException;
import okhttp3.CookieJar; import java.net.CookieHandler;
import okhttp3.HttpUrl; import java.net.URI;
import java.util.Collections;
import java.util.List; import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentHashMap;
public class Cookies { public class Cookies extends CookieHandler {
final ThreadLocal<ConcurrentHashMap<String, List<Cookie>>> cookieJar = ThreadLocal.withInitial(ConcurrentHashMap::new); final ThreadLocal<ConcurrentHashMap<String, List<String>>> cookieJar = ThreadLocal.withInitial(ConcurrentHashMap::new);
public CookieJar getJar() {
return new CookieJar() {
@Override
public void saveFromResponse(HttpUrl url, List<Cookie> cookies) {
if (!cookies.isEmpty()) {
cookieJar.get().put(url.host(), cookies);
}
}
@Override
public List<Cookie> loadForRequest(HttpUrl url) {
return cookieJar.get().getOrDefault(url.host(), Collections.emptyList());
}
};
}
public void clear() { public void clear() {
cookieJar.get().clear(); cookieJar.get().clear();
@@ -38,6 +19,16 @@ public class Cookies {
} }
public List<String> getCookies() { public List<String> getCookies() {
return cookieJar.get().values().stream().flatMap(List::stream).map(Cookie::toString).toList(); return cookieJar.get().values().stream().flatMap(List::stream).toList();
}
@Override
public Map<String, List<String>> get(URI uri, Map<String, List<String>> requestHeaders) throws IOException {
return cookieJar.get();
}
@Override
public void put(URI uri, Map<String, List<String>> responseHeaders) throws IOException {
cookieJar.get().putAll(responseHeaders);
} }
} }

View File

@@ -3,31 +3,31 @@ package nu.marginalia.crawl.fetcher;
import com.google.inject.ImplementedBy; import com.google.inject.ImplementedBy;
import crawlercommons.robots.SimpleRobotRules; import crawlercommons.robots.SimpleRobotRules;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder; import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
import nu.marginalia.model.EdgeDomain; import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl; import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.HttpFetchResult; import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.model.crawldata.CrawlerDomainStatus; import nu.marginalia.model.crawldata.CrawlerDomainStatus;
import org.apache.hc.client5.http.cookie.CookieStore;
import java.util.List; import java.util.List;
@ImplementedBy(HttpFetcherImpl.class) @ImplementedBy(HttpFetcherImpl.class)
public interface HttpFetcher { public interface HttpFetcher extends AutoCloseable {
void setAllowAllContentTypes(boolean allowAllContentTypes); void setAllowAllContentTypes(boolean allowAllContentTypes);
List<String> getCookies(); CookieStore getCookies();
void clearCookies(); void clearCookies();
DomainProbeResult probeDomain(EdgeUrl url); DomainProbeResult probeDomain(EdgeUrl url);
ContentTypeProbeResult probeContentType(
EdgeUrl url,
WarcRecorder recorder,
ContentTags tags) throws HttpFetcherImpl.RateLimitException;
HttpFetchResult fetchContent(EdgeUrl url, HttpFetchResult fetchContent(EdgeUrl url,
WarcRecorder recorder, WarcRecorder recorder,
CrawlDelayTimer timer,
ContentTags tags, ContentTags tags,
ProbeType probeType) throws HttpFetcherImpl.RateLimitException, Exception; ProbeType probeType);
List<EdgeUrl> fetchSitemapUrls(String rootSitemapUrl, CrawlDelayTimer delayTimer);
SimpleRobotRules fetchRobotRules(EdgeDomain domain, WarcRecorder recorder); SimpleRobotRules fetchRobotRules(EdgeDomain domain, WarcRecorder recorder);
@@ -43,6 +43,7 @@ public interface HttpFetcher {
/** This domain redirects to another domain */ /** This domain redirects to another domain */
record Redirect(EdgeDomain domain) implements DomainProbeResult {} record Redirect(EdgeDomain domain) implements DomainProbeResult {}
record RedirectSameDomain_Internal(EdgeUrl domain) implements DomainProbeResult {}
/** If the retrieval of the probed url was successful, return the url as it was fetched /** If the retrieval of the probed url was successful, return the url as it was fetched
* (which may be different from the url we probed, if we attempted another URL schema). * (which may be different from the url we probed, if we attempted another URL schema).
@@ -53,7 +54,10 @@ public interface HttpFetcher {
} }
sealed interface ContentTypeProbeResult { sealed interface ContentTypeProbeResult {
record NoOp() implements ContentTypeProbeResult {}
record Ok(EdgeUrl resolvedUrl) implements ContentTypeProbeResult { } record Ok(EdgeUrl resolvedUrl) implements ContentTypeProbeResult { }
record HttpError(int statusCode, String message) implements ContentTypeProbeResult { }
record Redirect(EdgeUrl location) implements ContentTypeProbeResult { }
record BadContentType(String contentType, int statusCode) implements ContentTypeProbeResult { } record BadContentType(String contentType, int statusCode) implements ContentTypeProbeResult { }
record Timeout(java.lang.Exception ex) implements ContentTypeProbeResult { } record Timeout(java.lang.Exception ex) implements ContentTypeProbeResult { }
record Exception(java.lang.Exception ex) implements ContentTypeProbeResult { } record Exception(java.lang.Exception ex) implements ContentTypeProbeResult { }
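
A hedged sketch of branching over the expanded probe result types; fetchDocument, followRedirect, skip and abort are hypothetical callbacks, and the real handling lives in HttpFetcherImpl.fetchContent further down:

// Illustrative, exhaustive switch over the sealed ContentTypeProbeResult
switch (probeResult) {
    case HttpFetcher.ContentTypeProbeResult.Ok(EdgeUrl resolved) -> fetchDocument(resolved);
    case HttpFetcher.ContentTypeProbeResult.Redirect(EdgeUrl location) -> followRedirect(location);
    case HttpFetcher.ContentTypeProbeResult.NoOp() -> fetchDocument(url);            // nothing was probed
    case HttpFetcher.ContentTypeProbeResult.BadContentType(String ct, int code) -> skip(ct, code);
    case HttpFetcher.ContentTypeProbeResult.HttpError(int code, String msg) -> skip(msg, code);
    case HttpFetcher.ContentTypeProbeResult.Timeout(Exception ex) -> abort(ex);
    case HttpFetcher.ContentTypeProbeResult.Exception(Exception ex) -> abort(ex);
}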

View File

@@ -1,78 +1,151 @@
package nu.marginalia.crawl.fetcher; package nu.marginalia.crawl.fetcher;
import com.google.inject.Inject; import com.google.inject.Inject;
import com.google.inject.Singleton;
import crawlercommons.robots.SimpleRobotRules; import crawlercommons.robots.SimpleRobotRules;
import crawlercommons.robots.SimpleRobotRulesParser; import crawlercommons.robots.SimpleRobotRulesParser;
import nu.marginalia.UserAgent; import nu.marginalia.UserAgent;
import nu.marginalia.crawl.fetcher.socket.FastTerminatingSocketFactory;
import nu.marginalia.crawl.fetcher.socket.IpInterceptingNetworkInterceptor;
import nu.marginalia.crawl.fetcher.socket.NoSecuritySSL;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder; import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
import nu.marginalia.link_parser.LinkParser;
import nu.marginalia.model.EdgeDomain; import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl; import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.ContentTypeLogic; import nu.marginalia.model.body.ContentTypeLogic;
import nu.marginalia.model.body.DocumentBodyExtractor; import nu.marginalia.model.body.DocumentBodyExtractor;
import nu.marginalia.model.body.HttpFetchResult; import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.model.crawldata.CrawlerDomainStatus; import nu.marginalia.model.crawldata.CrawlerDomainStatus;
import okhttp3.ConnectionPool; import org.apache.hc.client5.http.ConnectionKeepAliveStrategy;
import okhttp3.Dispatcher; import org.apache.hc.client5.http.HttpRequestRetryStrategy;
import okhttp3.OkHttpClient; import org.apache.hc.client5.http.classic.HttpClient;
import okhttp3.Request; import org.apache.hc.client5.http.config.ConnectionConfig;
import org.apache.hc.client5.http.config.RequestConfig;
import org.apache.hc.client5.http.cookie.BasicCookieStore;
import org.apache.hc.client5.http.cookie.CookieStore;
import org.apache.hc.client5.http.cookie.StandardCookieSpec;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManagerBuilder;
import org.apache.hc.client5.http.ssl.DefaultClientTlsStrategy;
import org.apache.hc.core5.http.*;
import org.apache.hc.core5.http.io.HttpClientResponseHandler;
import org.apache.hc.core5.http.io.SocketConfig;
import org.apache.hc.core5.http.io.entity.EntityUtils;
import org.apache.hc.core5.http.io.support.ClassicRequestBuilder;
import org.apache.hc.core5.http.message.MessageSupport;
import org.apache.hc.core5.http.protocol.HttpContext;
import org.apache.hc.core5.util.TimeValue;
import org.apache.hc.core5.util.Timeout;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.parser.Parser;
import org.slf4j.Logger; import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;
import javax.net.ssl.X509TrustManager; import javax.net.ssl.SSLContext;
import java.io.InterruptedIOException; import java.io.IOException;
import java.net.SocketTimeoutException;
import java.net.URISyntaxException;
import java.security.NoSuchAlgorithmException;
import java.time.Duration; import java.time.Duration;
import java.util.List; import java.util.*;
import java.util.Objects; import java.util.concurrent.Semaphore;
import java.util.Optional;
import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
public class HttpFetcherImpl implements HttpFetcher { @Singleton
public class HttpFetcherImpl implements HttpFetcher, HttpRequestRetryStrategy {
private final Logger logger = LoggerFactory.getLogger(getClass()); private final Logger logger = LoggerFactory.getLogger(getClass());
private final String userAgentString; private final String userAgentString;
private final String userAgentIdentifier; private final String userAgentIdentifier;
private final Cookies cookies = new Cookies();
private final CookieStore cookies = new BasicCookieStore();
private static final SimpleRobotRulesParser robotsParser = new SimpleRobotRulesParser(); private static final SimpleRobotRulesParser robotsParser = new SimpleRobotRulesParser();
private static final ContentTypeLogic contentTypeLogic = new ContentTypeLogic(); private static final ContentTypeLogic contentTypeLogic = new ContentTypeLogic();
private final Marker crawlerAuditMarker = MarkerFactory.getMarker("CRAWLER");
private final LinkParser linkParser = new LinkParser();
@Override @Override
public void setAllowAllContentTypes(boolean allowAllContentTypes) { public void setAllowAllContentTypes(boolean allowAllContentTypes) {
contentTypeLogic.setAllowAllContentTypes(allowAllContentTypes); contentTypeLogic.setAllowAllContentTypes(allowAllContentTypes);
} }
private final OkHttpClient client; private final CloseableHttpClient client;
private static final FastTerminatingSocketFactory ftSocketFactory = new FastTerminatingSocketFactory(); private CloseableHttpClient createClient() throws NoSuchAlgorithmException {
final ConnectionConfig connectionConfig = ConnectionConfig.custom()
.setSocketTimeout(10, TimeUnit.SECONDS)
.setConnectTimeout(30, TimeUnit.SECONDS)
.build();
private OkHttpClient createClient(Dispatcher dispatcher, ConnectionPool pool) { final PoolingHttpClientConnectionManager connectionManager = PoolingHttpClientConnectionManagerBuilder.create()
var builder = new OkHttpClient.Builder(); .setMaxConnPerRoute(2)
if (dispatcher != null) { .setMaxConnTotal(5000)
builder.dispatcher(dispatcher); .setDefaultConnectionConfig(connectionConfig)
} .setTlsSocketStrategy(new DefaultClientTlsStrategy(SSLContext.getDefault()))
.build();
return builder.sslSocketFactory(NoSecuritySSL.buildSocketFactory(), (X509TrustManager) NoSecuritySSL.trustAllCerts[0]) connectionManager.setDefaultSocketConfig(SocketConfig.custom()
.socketFactory(ftSocketFactory) .setSoLinger(TimeValue.ofSeconds(15))
.hostnameVerifier(NoSecuritySSL.buildHostnameVerifyer()) .setSoTimeout(Timeout.ofSeconds(10))
.addNetworkInterceptor(new IpInterceptingNetworkInterceptor()) .build()
.connectionPool(pool) );
.cookieJar(cookies.getJar())
.followRedirects(true)
.followSslRedirects(true)
.connectTimeout(8, TimeUnit.SECONDS)
.readTimeout(10, TimeUnit.SECONDS)
.writeTimeout(10, TimeUnit.SECONDS)
.build();
final RequestConfig defaultRequestConfig = RequestConfig.custom()
.setCookieSpec(StandardCookieSpec.RELAXED)
.setResponseTimeout(10, TimeUnit.SECONDS)
.setConnectionRequestTimeout(8, TimeUnit.SECONDS)
.build();
return HttpClients.custom()
.setDefaultCookieStore(cookies)
.setConnectionManager(connectionManager)
.setRetryStrategy(this)
.setKeepAliveStrategy(new ConnectionKeepAliveStrategy() {
// Default keep-alive duration is 3 minutes, but this is too long for us,
// as we are either going to re-use it fairly quickly or close it for a long time.
//
// So we set it to 30 seconds or clamp the server-provided value to a minimum of 10 seconds.
private static final TimeValue defaultValue = TimeValue.ofSeconds(30);
@Override
public TimeValue getKeepAliveDuration(HttpResponse response, HttpContext context) {
final Iterator<HeaderElement> it = MessageSupport.iterate(response, HeaderElements.KEEP_ALIVE);
while (it.hasNext()) {
final HeaderElement he = it.next();
final String param = he.getName();
final String value = he.getValue();
if (value == null)
continue;
if (!"timeout".equalsIgnoreCase(param))
continue;
try {
long timeout = Long.parseLong(value);
timeout = Math.clamp(timeout, 30, defaultValue.toSeconds());
return TimeValue.ofSeconds(timeout);
} catch (final NumberFormatException ignore) {
break;
}
}
return defaultValue;
}
})
.disableRedirectHandling()
.setDefaultRequestConfig(defaultRequestConfig)
.build();
} }
@Override @Override
public List<String> getCookies() { public CookieStore getCookies() {
return cookies.getCookies(); return cookies;
} }
@Override @Override
@@ -81,26 +154,32 @@ public class HttpFetcherImpl implements HttpFetcher {
} }
@Inject @Inject
public HttpFetcherImpl(UserAgent userAgent, public HttpFetcherImpl(UserAgent userAgent)
Dispatcher dispatcher,
ConnectionPool connectionPool)
{ {
this.client = createClient(dispatcher, connectionPool); try {
this.client = createClient();
} catch (NoSuchAlgorithmException e) {
throw new RuntimeException(e);
}
this.userAgentString = userAgent.uaString(); this.userAgentString = userAgent.uaString();
this.userAgentIdentifier = userAgent.uaIdentifier(); this.userAgentIdentifier = userAgent.uaIdentifier();
} }
public HttpFetcherImpl(String userAgent) { public HttpFetcherImpl(String userAgent) {
this.client = createClient(null, new ConnectionPool()); try {
this.client = createClient();
} catch (NoSuchAlgorithmException e) {
throw new RuntimeException(e);
}
this.userAgentString = userAgent; this.userAgentString = userAgent;
this.userAgentIdentifier = userAgent; this.userAgentIdentifier = userAgent;
} }
// Not necessary in prod, but useful in test // Not necessary in prod, but useful in test
public void close() { public void close() throws IOException {
client.dispatcher().executorService().shutdown(); client.close();
client.connectionPool().evictAll();
} }
/** /**
* Probe the domain to see if it is reachable, attempting to identify which schema to use, * Probe the domain to see if it is reachable, attempting to identify which schema to use,
* and if there are any redirects. This is done by one or more HEAD requests. * and if there are any redirects. This is done by one or more HEAD requests.
@@ -110,23 +189,94 @@ public class HttpFetcherImpl implements HttpFetcher {
*/ */
@Override @Override
public DomainProbeResult probeDomain(EdgeUrl url) { public DomainProbeResult probeDomain(EdgeUrl url) {
var head = new Request.Builder().head().addHeader("User-agent", userAgentString) List<EdgeUrl> urls = new ArrayList<>();
.url(url.toString()) urls.add(url);
.build();
var call = client.newCall(head); int redirects = 0;
AtomicBoolean tryGet = new AtomicBoolean(false);
try (var rsp = call.execute()) { while (!urls.isEmpty() && ++redirects < 5) {
EdgeUrl requestUrl = new EdgeUrl(rsp.request().url().toString()); ClassicHttpRequest request;
if (!Objects.equals(requestUrl.domain, url.domain)) { EdgeUrl topUrl = urls.removeFirst();
return new DomainProbeResult.Redirect(requestUrl.domain); try {
if (tryGet.get()) {
request = ClassicRequestBuilder.get(topUrl.asURI())
.addHeader("User-Agent", userAgentString)
.addHeader("Accept-Encoding", "gzip")
.addHeader("Range", "bytes=0-255")
.build();
} else {
request = ClassicRequestBuilder.head(topUrl.asURI())
.addHeader("User-Agent", userAgentString)
.addHeader("Accept-Encoding", "gzip")
.build();
}
} catch (URISyntaxException e) {
return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Invalid URL");
} }
return new DomainProbeResult.Ok(requestUrl);
} try {
catch (Exception ex) { var result = SendLock.wrapSend(client, request, response -> {
return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, ex.getMessage()); EntityUtils.consume(response.getEntity());
return switch (response.getCode()) {
case 200 -> new DomainProbeResult.Ok(url);
case 405 -> {
if (!tryGet.get()) {
tryGet.set(true);
yield new DomainProbeResult.RedirectSameDomain_Internal(url);
}
else {
yield new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "HTTP status 405, tried HEAD and GET?!");
}
}
case 301, 302, 307 -> {
var location = response.getFirstHeader("Location");
if (location != null) {
Optional<EdgeUrl> newUrl = linkParser.parseLink(topUrl, location.getValue());
if (newUrl.isEmpty()) {
yield new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Invalid location header on redirect");
}
EdgeUrl newEdgeUrl = newUrl.get();
if (newEdgeUrl.domain.equals(topUrl.domain)) {
yield new DomainProbeResult.RedirectSameDomain_Internal(newEdgeUrl);
}
else {
yield new DomainProbeResult.Redirect(newEdgeUrl.domain);
}
}
yield new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "No location header on redirect");
}
default ->
new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "HTTP status " + response.getCode());
};
});
if (result instanceof DomainProbeResult.RedirectSameDomain_Internal(EdgeUrl redirUrl)) {
urls.add(redirUrl);
}
else {
return result;
}
// We don't have robots.txt yet, so we'll assume a request delay of 1 second
TimeUnit.SECONDS.sleep(1);
}
catch (SocketTimeoutException ex) {
return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Timeout during domain probe");
}
catch (Exception ex) {
return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Error during domain probe");
}
} }
return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Failed to resolve domain root");
} }
/** Perform a HEAD request to fetch the content type of a URL. /** Perform a HEAD request to fetch the content type of a URL.
@@ -137,66 +287,72 @@ public class HttpFetcherImpl implements HttpFetcher {
* recorded in the WARC file on failure. * recorded in the WARC file on failure.
*/ */
public ContentTypeProbeResult probeContentType(EdgeUrl url, public ContentTypeProbeResult probeContentType(EdgeUrl url,
WarcRecorder warcRecorder, CrawlDelayTimer timer,
ContentTags tags) throws RateLimitException { ContentTags tags) {
if (tags.isEmpty() && contentTypeLogic.isUrlLikeBinary(url)) { if (!tags.isEmpty() || !contentTypeLogic.isUrlLikeBinary(url)) {
var headBuilder = new Request.Builder().head() return new ContentTypeProbeResult.NoOp();
.addHeader("User-agent", userAgentString) }
.addHeader("Accept-Encoding", "gzip")
.url(url.toString()); try {
ClassicHttpRequest head = ClassicRequestBuilder.head(url.asURI())
var head = headBuilder.build(); .addHeader("User-Agent", userAgentString)
var call = client.newCall(head); .addHeader("Accept-Encoding", "gzip")
.build();
try (var rsp = call.execute()) {
var contentTypeHeader = rsp.header("Content-type"); var result = SendLock.wrapSend(client, head, (rsp) -> {
EntityUtils.consume(rsp.getEntity());
if (contentTypeHeader != null && !contentTypeLogic.isAllowableContentType(contentTypeHeader)) {
warcRecorder.flagAsFailedContentTypeProbe(url, contentTypeHeader, rsp.code()); int statusCode = rsp.getCode();
return new ContentTypeProbeResult.BadContentType(contentTypeHeader, rsp.code()); // Handle redirects
} if (statusCode == 301 || statusCode == 302 || statusCode == 307) {
var location = rsp.getFirstHeader("Location");
// Update the URL to the final URL of the HEAD request, otherwise we might end up doing if (location != null) {
Optional<EdgeUrl> newUrl = linkParser.parseLink(url, location.getValue());
// HEAD 301 url1 -> url2 if (newUrl.isEmpty())
// HEAD 200 url2 return new ContentTypeProbeResult.HttpError(statusCode, "Invalid location header on redirect");
// GET 301 url1 -> url2 return new ContentTypeProbeResult.Redirect(newUrl.get());
// GET 200 url2 }
}
// which is not what we want. Overall we want to do as few requests as possible to not raise
// too many eyebrows when looking at the logs on the target server. Overall it's probably desirable if (statusCode == 405) {
// that it looks like the traffic makes sense, as opposed to looking like a broken bot. // If we get a 405, we can't probe the content type with HEAD, so we'll just say it's ok
return new ContentTypeProbeResult.Ok(url);
var redirectUrl = new EdgeUrl(rsp.request().url().toString()); }
EdgeUrl ret;
// Handle errors
if (Objects.equals(redirectUrl.domain, url.domain)) ret = redirectUrl; if (statusCode < 200 || statusCode > 300) {
else ret = url; return new ContentTypeProbeResult.HttpError(statusCode, "Bad status code");
}
// Intercept rate limiting
if (rsp.code() == 429) { // Handle missing content type
throw new HttpFetcherImpl.RateLimitException(Objects.requireNonNullElse(rsp.header("Retry-After"), "1")); var ctHeader = rsp.getFirstHeader("Content-Type");
} if (ctHeader == null) {
return new ContentTypeProbeResult.HttpError(statusCode, "Missing Content-Type header");
return new ContentTypeProbeResult.Ok(ret); }
} var contentType = ctHeader.getValue();
catch (RateLimitException ex) {
throw ex; // Check if the content type is allowed
} if (contentTypeLogic.isAllowableContentType(contentType)) {
catch (InterruptedIOException ex) { return new ContentTypeProbeResult.Ok(url);
warcRecorder.flagAsTimeout(url); } else {
return new ContentTypeProbeResult.BadContentType(contentType, statusCode);
return new ContentTypeProbeResult.Timeout(ex); }
} catch (Exception ex) { });
logger.error("Error during fetching {}[{}]", ex.getClass().getSimpleName(), ex.getMessage());
return result;
warcRecorder.flagAsError(url, ex); }
catch (SocketTimeoutException ex) {
return new ContentTypeProbeResult.Exception(ex);
} return new ContentTypeProbeResult.Timeout(ex);
}
catch (Exception ex) {
logger.error("Error during fetching {}[{}]", ex.getClass().getSimpleName(), ex.getMessage());
return new ContentTypeProbeResult.Exception(ex);
}
finally {
timer.waitFetchDelay();
} }
return new ContentTypeProbeResult.Ok(url);
} }
/** Fetch the content of a URL, and record it in a WARC file, /** Fetch the content of a URL, and record it in a WARC file,
@@ -206,35 +362,75 @@ public class HttpFetcherImpl implements HttpFetcher {
@Override @Override
public HttpFetchResult fetchContent(EdgeUrl url, public HttpFetchResult fetchContent(EdgeUrl url,
WarcRecorder warcRecorder, WarcRecorder warcRecorder,
CrawlDelayTimer timer,
ContentTags contentTags, ContentTags contentTags,
ProbeType probeType) ProbeType probeType)
throws Exception
{ {
var getBuilder = new Request.Builder().get(); try {
if (probeType == HttpFetcher.ProbeType.FULL) {
try {
var probeResult = probeContentType(url, timer, contentTags);
logger.info(crawlerAuditMarker, "Probe result {} for {}", probeResult.getClass().getSimpleName(), url);
switch (probeResult) {
case HttpFetcher.ContentTypeProbeResult.NoOp():
break; //
case HttpFetcher.ContentTypeProbeResult.Ok(EdgeUrl resolvedUrl):
url = resolvedUrl; // If we were redirected while probing, use the final URL for fetching
break;
case ContentTypeProbeResult.BadContentType badContentType:
warcRecorder.flagAsFailedContentTypeProbe(url, badContentType.contentType(), badContentType.statusCode());
return new HttpFetchResult.ResultNone();
case ContentTypeProbeResult.BadContentType.Timeout(Exception ex):
warcRecorder.flagAsTimeout(url);
return new HttpFetchResult.ResultException(ex);
case ContentTypeProbeResult.Exception(Exception ex):
warcRecorder.flagAsError(url, ex);
return new HttpFetchResult.ResultException(ex);
case ContentTypeProbeResult.HttpError httpError:
return new HttpFetchResult.ResultException(new HttpException("HTTP status code " + httpError.statusCode() + ": " + httpError.message()));
case ContentTypeProbeResult.Redirect redirect:
return new HttpFetchResult.ResultRedirect(redirect.location());
}
} catch (Exception ex) {
logger.warn("Failed to fetch {}", url, ex);
return new HttpFetchResult.ResultException(ex);
}
getBuilder.url(url.toString())
.addHeader("Accept-Encoding", "gzip")
.addHeader("Accept-Language", "en,*;q=0.5")
.addHeader("Accept", "text/html, application/xhtml+xml, text/*;q=0.8")
.addHeader("User-agent", userAgentString);
contentTags.paint(getBuilder);
HttpFetchResult result = warcRecorder.fetch(client, getBuilder.build());
if (result instanceof HttpFetchResult.ResultOk ok) {
if (ok.statusCode() == 429) {
throw new RateLimitException(Objects.requireNonNullElse(ok.header("Retry-After"), "1"));
} }
if (ok.statusCode() == 304) {
return new HttpFetchResult.Result304Raw(); ClassicRequestBuilder getBuilder = ClassicRequestBuilder.get(url.asURI())
} .addHeader("User-Agent", userAgentString)
if (ok.statusCode() == 200) { .addHeader("Accept-Encoding", "gzip")
return ok; .addHeader("Accept-Language", "en,*;q=0.5")
.addHeader("Accept", "text/html, application/xhtml+xml, text/*;q=0.8");
contentTags.paint(getBuilder);
try (var sl = new SendLock()) {
HttpFetchResult result = warcRecorder.fetch(client, getBuilder.build());
if (result instanceof HttpFetchResult.ResultOk ok) {
if (ok.statusCode() == 304) {
return new HttpFetchResult.Result304Raw();
}
}
switch (result) {
case HttpFetchResult.ResultOk ok -> logger.info(crawlerAuditMarker, "Fetch result OK {} for {}", ok.statusCode(), url);
case HttpFetchResult.ResultRedirect redirect -> logger.info(crawlerAuditMarker, "Fetch result redirect: {} for {}", redirect.url(), url);
case HttpFetchResult.ResultNone none -> logger.info(crawlerAuditMarker, "Fetch result none for {}", url);
case HttpFetchResult.ResultException ex -> logger.error(crawlerAuditMarker, "Fetch result exception: {} for {}", ex.getClass().getSimpleName(), url);
case HttpFetchResult.Result304Raw raw -> logger.info(crawlerAuditMarker, "Fetch result: 304 Raw for {}", url);
case HttpFetchResult.Result304ReplacedWithReference ref -> logger.info(crawlerAuditMarker, "Fetch result: 304 With reference for {}", url);
}
return result;
} }
} }
catch (Exception ex) {
ex.printStackTrace();
return new HttpFetchResult.ResultException(ex);
}
return result;
} }
@Override @Override
@@ -242,6 +438,126 @@ public class HttpFetcherImpl implements HttpFetcher {
return new SitemapRetriever(); return new SitemapRetriever();
} }
/** Recursively fetch sitemaps */
@Override
public List<EdgeUrl> fetchSitemapUrls(String root, CrawlDelayTimer delayTimer) {
try {
List<EdgeUrl> ret = new ArrayList<>();
Set<String> seenUrls = new HashSet<>();
Set<String> seenSitemaps = new HashSet<>();
Deque<EdgeUrl> sitemapQueue = new LinkedList<>();
EdgeUrl rootSitemapUrl = new EdgeUrl(root);
sitemapQueue.add(rootSitemapUrl);
int fetchedSitemaps = 0;
while (!sitemapQueue.isEmpty() && ret.size() < 20_000 && ++fetchedSitemaps < 10) {
var head = sitemapQueue.removeFirst();
switch (fetchSingleSitemap(head)) {
case SitemapResult.SitemapUrls(List<String> urls) -> {
for (var url : urls) {
if (seenUrls.add(url)) {
EdgeUrl.parse(url)
.filter(u -> u.domain.equals(rootSitemapUrl.domain))
.ifPresent(ret::add);
}
}
}
case SitemapResult.SitemapReferences(List<String> refs) -> {
for (var ref : refs) {
if (seenSitemaps.add(ref)) {
EdgeUrl.parse(ref)
.filter(url -> url.domain.equals(rootSitemapUrl.domain))
.ifPresent(sitemapQueue::addFirst);
}
}
}
case SitemapResult.SitemapError() -> {}
}
delayTimer.waitFetchDelay();
}
return ret;
}
catch (Exception ex) {
logger.error("Error while fetching sitemaps via {}: {} ({})", root, ex.getClass().getSimpleName(), ex.getMessage());
return List.of();
}
}
private SitemapResult fetchSingleSitemap(EdgeUrl sitemapUrl) throws URISyntaxException, IOException, InterruptedException {
ClassicHttpRequest getRequest = ClassicRequestBuilder.get(sitemapUrl.asURI())
.addHeader("User-Agent", userAgentString)
.addHeader("Accept-Encoding", "gzip")
.addHeader("Accept", "text/*, */*;q=0.9")
.addHeader("User-Agent", userAgentString)
.build();
try (var sl = new SendLock()) {
return client.execute(getRequest, response -> {
if (response.getCode() != 200) {
return new SitemapResult.SitemapError();
}
Document parsedSitemap = Jsoup.parse(
EntityUtils.toString(response.getEntity()),
sitemapUrl.toString(),
Parser.xmlParser()
);
if (parsedSitemap.childrenSize() == 0) {
return new SitemapResult.SitemapError();
}
String rootTagName = parsedSitemap.child(0).tagName();
return switch (rootTagName.toLowerCase()) {
case "sitemapindex" -> {
List<String> references = new ArrayList<>();
for (var locTag : parsedSitemap.getElementsByTag("loc")) {
references.add(locTag.text().trim());
}
yield new SitemapResult.SitemapReferences(Collections.unmodifiableList(references));
}
case "urlset" -> {
List<String> urls = new ArrayList<>();
for (var locTag : parsedSitemap.select("url > loc")) {
urls.add(locTag.text().trim());
}
yield new SitemapResult.SitemapUrls(Collections.unmodifiableList(urls));
}
case "rss", "atom" -> {
List<String> urls = new ArrayList<>();
for (var locTag : parsedSitemap.select("link, url")) {
urls.add(locTag.text().trim());
}
yield new SitemapResult.SitemapUrls(Collections.unmodifiableList(urls));
}
default -> new SitemapResult.SitemapError();
};
});
}
catch (Exception ex) {
logger.warn("Error while fetching sitemap {}: {} ({})", sitemapUrl, ex.getClass().getSimpleName(), ex.getMessage());
return new SitemapResult.SitemapError();
}
}
private sealed interface SitemapResult {
record SitemapUrls(List<String> urls) implements SitemapResult {}
record SitemapReferences(List<String> sitemapRefs) implements SitemapResult {}
record SitemapError() implements SitemapResult {}
}
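
A hedged usage sketch of the recursive sitemap fetcher above; the CrawlDelayTimer constructor argument (a delay in milliseconds) is an assumption, and the URL is a placeholder:

// Fetch up to ~20k same-domain URLs from a sitemap (and any nested sitemap indexes)
var delayTimer = new CrawlDelayTimer(1_000);
List<EdgeUrl> sitemapUrls = fetcher.fetchSitemapUrls("https://www.example.com/sitemap.xml", delayTimer);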
    @Override
    public SimpleRobotRules fetchRobotRules(EdgeDomain domain, WarcRecorder recorder) {
        var ret = fetchAndParseRobotsTxt(new EdgeUrl("https", domain, null, "/robots.txt", null), recorder);
@@ -256,15 +572,15 @@ public class HttpFetcherImpl implements HttpFetcher {
    }

    private Optional<SimpleRobotRules> fetchAndParseRobotsTxt(EdgeUrl url, WarcRecorder recorder) {
-       try {
-           var getBuilder = new Request.Builder().get();
-           getBuilder.url(url.toString())
+       try (var sl = new SendLock()) {
+           ClassicHttpRequest request = ClassicRequestBuilder.get(url.asURI())
+                   .addHeader("User-Agent", userAgentString)
                    .addHeader("Accept-Encoding", "gzip")
                    .addHeader("Accept", "text/*, */*;q=0.9")
-                   .addHeader("User-agent", userAgentString);
+                   .build();

-           HttpFetchResult result = recorder.fetch(client, getBuilder.build());
+           HttpFetchResult result = recorder.fetch(client, request);

            return DocumentBodyExtractor.asBytes(result).mapOpt((contentType, body) ->
                    robotsParser.parseContent(url.toString(),
@@ -278,6 +594,56 @@ public class HttpFetcherImpl implements HttpFetcher {
        }
    }
@Override
public boolean retryRequest(HttpRequest request, IOException exception, int executionCount, HttpContext context) {
if (exception instanceof SocketTimeoutException ex) {
return false;
}
return executionCount < 3;
}
@Override
public boolean retryRequest(HttpResponse response, int executionCount, HttpContext context) {
return switch (response.getCode()) {
case 500, 503 -> executionCount < 2;
case 429 -> executionCount < 3;
default -> false;
};
}
@Override
public TimeValue getRetryInterval(HttpRequest request, IOException exception, int executionCount, HttpContext context) {
return TimeValue.ofSeconds(1);
}
@Override
public TimeValue getRetryInterval(HttpResponse response, int executionCount, HttpContext context) {
int statusCode = response.getCode();
// Give 503 a bit more time
if (statusCode == 503) return TimeValue.ofSeconds(5);
if (statusCode == 429) {
// get the Retry-After header
String retryAfter = response.getFirstHeader("Retry-After").getValue();
if (retryAfter == null) {
return TimeValue.ofSeconds(2);
}
try {
int retryAfterTime = Integer.parseInt(retryAfter);
retryAfterTime = Math.clamp(retryAfterTime, 1, 5);
return TimeValue.ofSeconds(retryAfterTime);
} catch (NumberFormatException e) {
logger.warn("Invalid Retry-After header: {}", retryAfter);
}
}
return TimeValue.ofSeconds(2);
}
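For context, a minimal sketch (not part of this changeset) of how a retry strategy with the semantics above can be registered on an Apache HttpClient 5 builder. The anonymous implementation and its simplified back-off rules are illustrative only, not the project's actual wiring.

import java.io.IOException;
import java.net.SocketTimeoutException;

import org.apache.hc.client5.http.HttpRequestRetryStrategy;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.core5.http.HttpRequest;
import org.apache.hc.core5.http.HttpResponse;
import org.apache.hc.core5.http.protocol.HttpContext;
import org.apache.hc.core5.util.TimeValue;

class RetryStrategyWiringExample {
    static CloseableHttpClient buildClient() {
        HttpRequestRetryStrategy retryStrategy = new HttpRequestRetryStrategy() {
            @Override
            public boolean retryRequest(HttpRequest request, IOException exception, int executionCount, HttpContext context) {
                // Don't retry socket timeouts; retry other I/O errors a couple of times
                return !(exception instanceof SocketTimeoutException) && executionCount < 3;
            }

            @Override
            public boolean retryRequest(HttpResponse response, int executionCount, HttpContext context) {
                // Retry transient server errors and rate limiting, as in the strategy above
                return switch (response.getCode()) {
                    case 500, 503 -> executionCount < 2;
                    case 429 -> executionCount < 3;
                    default -> false;
                };
            }

            @Override
            public TimeValue getRetryInterval(HttpResponse response, int executionCount, HttpContext context) {
                // Simplified back-off; the real implementation above also honors Retry-After
                return TimeValue.ofSeconds(response.getCode() == 503 ? 5 : 2);
            }
        };

        return HttpClients.custom()
                .setRetryStrategy(retryStrategy)
                .build();
    }
}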
    public static class RateLimitException extends Exception {
        private final String retryAfter;
@@ -298,5 +664,31 @@ public class HttpFetcherImpl implements HttpFetcher {
        }
    }
}
}
class SendLock implements AutoCloseable {
private static final Semaphore maxConcurrentRequests = new Semaphore(Integer.getInteger("crawler.maxConcurrentRequests", 512));
boolean closed = false;
public SendLock() {
maxConcurrentRequests.acquireUninterruptibly();
}
public static <T> T wrapSend(HttpClient client, final ClassicHttpRequest request,
final HttpClientResponseHandler<? extends T> responseHandler) throws IOException {
try (var lock = new SendLock()) {
return client.execute(request, responseHandler);
}
}
@Override
public void close() {
if (!closed) {
maxConcurrentRequests.release();
closed = true;
}
}
} }
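A small usage sketch (not part of this changeset) of the SendLock helper above. It assumes the caller lives in the same package, since the class is package-private, and the URL is a placeholder.

import java.io.IOException;

import org.apache.hc.client5.http.classic.HttpClient;
import org.apache.hc.core5.http.ClassicHttpRequest;
import org.apache.hc.core5.http.io.entity.EntityUtils;
import org.apache.hc.core5.http.io.support.ClassicRequestBuilder;

class SendLockUsageExample {
    static String fetchBody(HttpClient client) throws IOException {
        // wrapSend() takes the shared semaphore around the call, so no more than
        // crawler.maxConcurrentRequests requests are in flight at any one time
        ClassicHttpRequest request = ClassicRequestBuilder.get("https://www.example.com/")
                .build();

        return SendLock.wrapSend(client, request, rsp -> EntityUtils.toString(rsp.getEntity()));
    }
}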


@@ -1,31 +0,0 @@
package nu.marginalia.crawl.fetcher.socket;
import okhttp3.Interceptor;
import okhttp3.Response;
import org.jetbrains.annotations.NotNull;
import java.io.IOException;
/** An interceptor that intercepts network requests and adds the remote IP address as
* a header in the response. This is used to pass the remote IP address to the Warc
* writer, as this information is not available in the response.
*/
public class IpInterceptingNetworkInterceptor implements Interceptor {
private static final String pseudoHeaderName = "X-Marginalia-Remote-IP";
@NotNull
@Override
public Response intercept(@NotNull Interceptor.Chain chain) throws IOException {
String IP = chain.connection().socket().getInetAddress().getHostAddress();
return chain.proceed(chain.request())
.newBuilder()
.addHeader(pseudoHeaderName, IP)
.build();
}
public static String getIpFromResponse(Response response) {
return response.header(pseudoHeaderName);
}
}


@@ -27,7 +27,7 @@ public class NoSecuritySSL {
        }
    };

-   public static SSLSocketFactory buildSocketFactory() {
+   public static SSLContext buildSslContext() {
        try {
            // Install the all-trusting trust manager
            final SSLContext sslContext = SSLContext.getInstance("TLS");
@@ -40,14 +40,11 @@ public class NoSecuritySSL {
            clientSessionContext.setSessionCacheSize(2048);

            // Create a ssl socket factory with our all-trusting manager
-           return sslContext.getSocketFactory();
+           return sslContext;
        }
        catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
-
-   public static HostnameVerifier buildHostnameVerifyer() {
-       return (hn, session) -> true;
-   }
}


@@ -1,15 +1,19 @@
package nu.marginalia.crawl.fetcher.warc;

-import okhttp3.Headers;
-import okhttp3.Response;
+import org.apache.commons.io.IOUtils;
import org.apache.commons.io.input.BOMInputStream;
+import org.apache.hc.core5.http.ClassicHttpResponse;
+import org.apache.hc.core5.http.Header;
import org.netpreserve.jwarc.WarcTruncationReason;

import java.io.*;
import java.nio.file.Files;
import java.nio.file.Path;
-import java.util.Objects;
-import java.util.zip.GZIPInputStream;
+import java.time.Duration;
+import java.time.Instant;
+import java.util.Arrays;
+
+import static nu.marginalia.crawl.fetcher.warc.ErrorBuffer.suppressContentEncoding;

/** Input buffer for temporary storage of a HTTP response
 * This may be in-memory or on-disk, at the discretion of
@@ -17,8 +21,9 @@ import java.util.zip.GZIPInputStream;
 * */
public abstract class WarcInputBuffer implements AutoCloseable {
    protected WarcTruncationReason truncationReason = WarcTruncationReason.NOT_TRUNCATED;
-   protected Headers headers;
+   protected Header[] headers;

-   WarcInputBuffer(Headers headers) {
+   WarcInputBuffer(Header[] headers) {
        this.headers = headers;
    }
@@ -30,7 +35,7 @@ public abstract class WarcInputBuffer implements AutoCloseable {
    public final WarcTruncationReason truncationReason() { return truncationReason; }

-   public final Headers headers() { return headers; }
+   public final Header[] headers() { return headers; }

    /** Create a buffer for a response.
     * If the response is small and not compressed, it will be stored in memory.
@@ -38,33 +43,37 @@ public abstract class WarcInputBuffer implements AutoCloseable {
     * and suppressed from the headers.
     * If an error occurs, a buffer will be created with no content and an error status.
     */
-   static WarcInputBuffer forResponse(Response rsp) {
-       if (rsp == null)
+   static WarcInputBuffer forResponse(ClassicHttpResponse response, Duration timeLimit) throws IOException {
+       if (response == null)
            return new ErrorBuffer();

-       try {
-           String contentLengthHeader = Objects.requireNonNullElse(rsp.header("Content-Length"), "-1");
-           int contentLength = Integer.parseInt(contentLengthHeader);
-           String contentEncoding = rsp.header("Content-Encoding");
-
-           if (contentEncoding == null && contentLength > 0 && contentLength < 8192) {
+       var entity = response.getEntity();
+
+       if (null == entity) {
+           return new ErrorBuffer();
+       }
+
+       InputStream is = entity.getContent();
+       long length = entity.getContentLength();
+
+       try (response) {
+           if (length > 0 && length < 8192) {
                // If the content is small and not compressed, we can just read it into memory
-               return new MemoryBuffer(rsp, contentLength);
-           }
-           else {
+               return new MemoryBuffer(response.getHeaders(), timeLimit, is, (int) length);
+           } else {
                // Otherwise, we unpack it into a file and read it from there
-               return new FileBuffer(rsp);
+               return new FileBuffer(response.getHeaders(), timeLimit, is);
            }
        }
-       catch (Exception ex) {
-           return new ErrorBuffer(rsp);
-       }
    }
    /** Copy an input stream to an output stream, with a maximum size and time limit */
-   protected void copy(InputStream is, OutputStream os) {
-       long startTime = System.currentTimeMillis();
+   protected void copy(InputStream is, OutputStream os, Duration timeLimit) {
+       Instant start = Instant.now();
+       Instant timeout = start.plus(timeLimit);
        long size = 0;
        byte[] buffer = new byte[8192];
@@ -74,24 +83,106 @@ public abstract class WarcInputBuffer implements AutoCloseable {
        while (true) {
            try {
-               int n = is.read(buffer);
-               if (n < 0) break;
-               size += n;
-               os.write(buffer, 0, n);
-
-               if (size > WarcRecorder.MAX_SIZE) {
-                   truncationReason = WarcTruncationReason.LENGTH;
-                   break;
-               }
-
-               if (System.currentTimeMillis() - startTime > WarcRecorder.MAX_TIME) {
+               Duration remaining = Duration.between(Instant.now(), timeout);
+               if (remaining.isNegative()) {
                    truncationReason = WarcTruncationReason.TIME;
                    break;
                }
int n = is.read(buffer);
if (n < 0) break;
size += n;
// Even if we've exceeded the max length,
// we keep consuming the stream up until the end or a timeout,
// as closing the stream means resetting the connection, and
// that's generally not desirable.
if (size < WarcRecorder.MAX_SIZE) {
os.write(buffer, 0, n);
}
else if (truncationReason != WarcTruncationReason.LENGTH) {
truncationReason = WarcTruncationReason.LENGTH;
}
} catch (IOException e) { } catch (IOException e) {
throw new RuntimeException(e); truncationReason = WarcTruncationReason.UNSPECIFIED;
} }
} }
// Try to close the connection as long as we haven't timed out.
// As per Apache HttpClient's semantics, this will reset the connection
// and close the stream if we have timed out.
if (truncationReason != WarcTruncationReason.TIME) {
IOUtils.closeQuietly(is);
}
}
    /** Takes a Content-Range header and checks if it is complete.
     * A complete range is one that covers the entire resource.
     * For example, "bytes 0-2047/2048" is a complete range.
     * "bytes 0-1023/2048" and "bytes 0-1023/*" are not complete ranges.
     */
public boolean isRangeComplete(Header[] headers) {
// Find the Content-Range header
String contentRangeHeader = null;
for (var header : headers) {
if ("Content-Range".equalsIgnoreCase(header.getName())) {
contentRangeHeader = header.getValue();
break;
}
}
// Return true if header is null or empty
if (contentRangeHeader == null || contentRangeHeader.isEmpty()) {
return true;
}
try {
// Content-Range format: "bytes range-start-range-end/size"
// e.g., "bytes 0-1023/2048" or "bytes 0-1023/*"
// Get the part after "bytes "
String[] parts = contentRangeHeader.split(" ", 2);
if (parts.length < 2) {
return false;
}
// Get the range and size parts (e.g., "0-1023/2048")
String rangeAndSize = parts[1];
String[] rangeAndSizeParts = rangeAndSize.split("/", 2);
if (rangeAndSizeParts.length < 2) {
return false;
}
// Get the range (e.g., "0-1023")
String range = rangeAndSizeParts[0];
String[] rangeParts = range.split("-", 2);
if (rangeParts.length < 2) {
return false;
}
// Get the size (e.g., "2048" or "*")
String size = rangeAndSizeParts[1];
// If size is "*", we don't know the total size, so return false
if ("*".equals(size)) {
return false;
}
// Parse as long to handle large files
long rangeStart = Long.parseLong(rangeParts[0]);
long rangeEnd = Long.parseLong(rangeParts[1]);
long totalSize = Long.parseLong(size);
// Check if the range covers the entire resource
return rangeStart == 0 && rangeEnd == totalSize - 1;
} catch (NumberFormatException | ArrayIndexOutOfBoundsException e) {
return false;
}
} }
} }
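A worked example (not part of this changeset) of how isRangeComplete() above classifies a few Content-Range values. It assumes it lives in the same package as WarcInputBuffer/ErrorBuffer, and the byte counts are made-up.

import org.apache.hc.core5.http.Header;
import org.apache.hc.core5.http.message.BasicHeader;

class ContentRangeExample {
    public static void main(String[] args) {
        WarcInputBuffer buffer = new ErrorBuffer();

        Header[] full = { new BasicHeader("Content-Range", "bytes 0-2047/2048") };
        Header[] firstHalf = { new BasicHeader("Content-Range", "bytes 0-1023/2048") };
        Header[] unknownTotal = { new BasicHeader("Content-Range", "bytes 0-1023/*") };

        System.out.println(buffer.isRangeComplete(new Header[0])); // true: no Content-Range header at all
        System.out.println(buffer.isRangeComplete(full));          // true: covers the whole 2048-byte resource
        System.out.println(buffer.isRangeComplete(firstHalf));     // false: only the first half
        System.out.println(buffer.isRangeComplete(unknownTotal));  // false: total size unknown
    }
}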
@@ -99,12 +190,8 @@ public abstract class WarcInputBuffer implements AutoCloseable {

/** Pseudo-buffer for when we have an error */
class ErrorBuffer extends WarcInputBuffer {
    public ErrorBuffer() {
-       super(Headers.of());
-       truncationReason = WarcTruncationReason.UNSPECIFIED;
-   }
-
-   public ErrorBuffer(Response rsp) {
-       super(rsp.headers());
+       super(new Header[0]);
        truncationReason = WarcTruncationReason.UNSPECIFIED;
    }
@@ -120,17 +207,29 @@ class ErrorBuffer extends WarcInputBuffer {
    @Override
    public void close() throws Exception {}

+   static Header[] suppressContentEncoding(Header[] headers) {
+       return Arrays.stream(headers).filter(header -> !"Content-Encoding".equalsIgnoreCase(header.getName())).toArray(Header[]::new);
+   }
}

/** Buffer for when we have the response in memory */
class MemoryBuffer extends WarcInputBuffer {
    byte[] data;
-   public MemoryBuffer(Response response, int size) {
-       super(response.headers());
+   public MemoryBuffer(Header[] headers, Duration timeLimit, InputStream responseStream, int size) {
+       super(suppressContentEncoding(headers));
+
+       if (!isRangeComplete(headers)) {
+           truncationReason = WarcTruncationReason.LENGTH;
+       } else {
+           truncationReason = WarcTruncationReason.NOT_TRUNCATED;
+       }
+
        var outputStream = new ByteArrayOutputStream(size);

-       copy(response.body().byteStream(), outputStream);
+       copy(responseStream, outputStream, timeLimit);

        data = outputStream.toByteArray();
    }
@@ -154,53 +253,25 @@ class MemoryBuffer extends WarcInputBuffer {
class FileBuffer extends WarcInputBuffer {
    private final Path tempFile;

-   public FileBuffer(Response response) throws IOException {
-       super(suppressContentEncoding(response.headers()));
+   public FileBuffer(Header[] headers, Duration timeLimit, InputStream responseStream) throws IOException {
+       super(suppressContentEncoding(headers));
+
+       if (!isRangeComplete(headers)) {
+           truncationReason = WarcTruncationReason.LENGTH;
+       } else {
+           truncationReason = WarcTruncationReason.NOT_TRUNCATED;
+       }

        this.tempFile = Files.createTempFile("rsp", ".html");

-       if (response.body() == null) {
-           truncationReason = WarcTruncationReason.DISCONNECT;
-           return;
-       }
-
-       if ("gzip".equals(response.header("Content-Encoding"))) {
-           try (var out = Files.newOutputStream(tempFile)) {
-               copy(new GZIPInputStream(response.body().byteStream()), out);
-           }
-           catch (Exception ex) {
-               truncationReason = WarcTruncationReason.UNSPECIFIED;
-           }
-       }
-       else {
-           try (var out = Files.newOutputStream(tempFile)) {
-               copy(response.body().byteStream(), out);
-           }
-           catch (Exception ex) {
-               truncationReason = WarcTruncationReason.UNSPECIFIED;
-           }
-       }
-   }
-
-   private static Headers suppressContentEncoding(Headers headers) {
-       var builder = new Headers.Builder();
-
-       headers.toMultimap().forEach((k, values) -> {
-           if ("Content-Encoding".equalsIgnoreCase(k)) {
-               return;
-           }
-
-           if ("Transfer-Encoding".equalsIgnoreCase(k)) {
-               return;
-           }
-
-           for (var value : values) {
-               builder.add(k, value);
-           }
-       });
-
-       return builder.build();
-   }
+       try (var out = Files.newOutputStream(tempFile)) {
+           copy(responseStream, out, timeLimit);
+       }
+       catch (Exception ex) {
+           truncationReason = WarcTruncationReason.UNSPECIFIED;
+       }
+   }

    public InputStream read() throws IOException {
        return Files.newInputStream(tempFile);
    }


@@ -1,11 +1,14 @@
package nu.marginalia.crawl.fetcher.warc; package nu.marginalia.crawl.fetcher.warc;
import okhttp3.Protocol;
import okhttp3.Response;
import org.apache.commons.lang3.StringUtils; import org.apache.commons.lang3.StringUtils;
import org.apache.hc.core5.http.ClassicHttpResponse;
import org.apache.hc.core5.http.Header;
import java.net.URI; import java.net.URI;
import java.net.URLEncoder; import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpHeaders;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets; import java.nio.charset.StandardCharsets;
import java.util.*; import java.util.*;
import java.util.stream.Collectors; import java.util.stream.Collectors;
@@ -16,7 +19,7 @@ import java.util.stream.Collectors;
public class WarcProtocolReconstructor { public class WarcProtocolReconstructor {
static String getHttpRequestString(String method, static String getHttpRequestString(String method,
Map<String, List<String>> mainHeaders, Header[] mainHeaders,
Map<String, List<String>> extraHeaders, Map<String, List<String>> extraHeaders,
URI uri) { URI uri) {
StringBuilder requestStringBuilder = new StringBuilder(); StringBuilder requestStringBuilder = new StringBuilder();
@@ -33,12 +36,13 @@ public class WarcProtocolReconstructor {
Set<String> addedHeaders = new HashSet<>(); Set<String> addedHeaders = new HashSet<>();
mainHeaders.forEach((k, values) -> { for (var header : mainHeaders) {
for (var value : values) { String k = header.getName();
addedHeaders.add(k); String v = header.getValue();
requestStringBuilder.append(capitalizeHeader(k)).append(": ").append(value).append("\r\n");
} addedHeaders.add(k);
}); requestStringBuilder.append(capitalizeHeader(k)).append(": ").append(v).append("\r\n");
}
extraHeaders.forEach((k, values) -> { extraHeaders.forEach((k, values) -> {
if (!addedHeaders.contains(k)) { if (!addedHeaders.contains(k)) {
@@ -75,17 +79,23 @@ public class WarcProtocolReconstructor {
return "HTTP/" + version + " " + statusCode + " " + statusMessage + "\r\n" + headerString + "\r\n\r\n"; return "HTTP/" + version + " " + statusCode + " " + statusMessage + "\r\n" + headerString + "\r\n\r\n";
} }
static String getResponseHeader(Response response, long size) { static String getResponseHeader(HttpResponse<?> response, long size) {
String version = response.protocol() == Protocol.HTTP_1_1 ? "1.1" : "2.0"; String version = response.version() == HttpClient.Version.HTTP_1_1 ? "1.1" : "2.0";
String statusCode = String.valueOf(response.code()); String statusCode = String.valueOf(response.statusCode());
String statusMessage = STATUS_CODE_MAP.getOrDefault(response.code(), "Unknown"); String statusMessage = STATUS_CODE_MAP.getOrDefault(response.statusCode(), "Unknown");
String headerString = getHeadersAsString(response, size); String headerString = getHeadersAsString(response.headers(), size);
return "HTTP/" + version + " " + statusCode + " " + statusMessage + "\r\n" + headerString + "\r\n\r\n"; return "HTTP/" + version + " " + statusCode + " " + statusMessage + "\r\n" + headerString + "\r\n\r\n";
} }
static String getResponseHeader(ClassicHttpResponse response, long size) {
String headerString = getHeadersAsString(response.getHeaders(), size);
return response.getVersion().format() + " " + response.getCode() + " " + response.getReasonPhrase() + "\r\n" + headerString + "\r\n\r\n";
}
private static final Map<Integer, String> STATUS_CODE_MAP = Map.ofEntries( private static final Map<Integer, String> STATUS_CODE_MAP = Map.ofEntries(
Map.entry(200, "OK"), Map.entry(200, "OK"),
Map.entry(201, "Created"), Map.entry(201, "Created"),
@@ -148,10 +158,41 @@ public class WarcProtocolReconstructor {
return joiner.toString(); return joiner.toString();
} }
static private String getHeadersAsString(Response response, long responseSize) {
static private String getHeadersAsString(Header[] headers, long responseSize) {
StringJoiner joiner = new StringJoiner("\r\n"); StringJoiner joiner = new StringJoiner("\r\n");
response.headers().toMultimap().forEach((k, values) -> { for (var header : headers) {
String headerCapitalized = capitalizeHeader(header.getName());
// Omit pseudoheaders injected by the crawler itself
if (headerCapitalized.startsWith("X-Marginalia"))
continue;
// Omit Transfer-Encoding and Content-Encoding headers
if (headerCapitalized.equals("Transfer-Encoding"))
continue;
if (headerCapitalized.equals("Content-Encoding"))
continue;
// Since we're transparently decoding gzip, we need to update the Content-Length header
// to reflect the actual size of the response body. We'll do this at the end.
if (headerCapitalized.equals("Content-Length"))
continue;
joiner.add(headerCapitalized + ": " + header.getValue());
}
joiner.add("Content-Length: " + responseSize);
return joiner.toString();
}
static private String getHeadersAsString(HttpHeaders headers, long responseSize) {
StringJoiner joiner = new StringJoiner("\r\n");
headers.map().forEach((k, values) -> {
String headerCapitalized = capitalizeHeader(k); String headerCapitalized = capitalizeHeader(k);
// Omit pseudoheaders injected by the crawler itself // Omit pseudoheaders injected by the crawler itself
@@ -179,8 +220,8 @@ public class WarcProtocolReconstructor {
return joiner.toString(); return joiner.toString();
} }
-   // okhttp gives us flattened headers, so we need to reconstruct Camel-Kebab-Case style
-   // for the WARC parser's sake...
+   // okhttp gave us flattened headers, so we need to reconstruct Camel-Kebab-Case style
+   // for the WARC parser's sake... (do we still need this, mr chesterton?)
    static private String capitalizeHeader(String k) {
        return Arrays.stream(StringUtils.split(k, '-'))
                .map(StringUtils::capitalize)


@@ -1,13 +1,17 @@
package nu.marginalia.crawl.fetcher.warc; package nu.marginalia.crawl.fetcher.warc;
import nu.marginalia.crawl.fetcher.ContentTags; import nu.marginalia.crawl.fetcher.ContentTags;
import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl; import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.crawl.fetcher.socket.IpInterceptingNetworkInterceptor; import nu.marginalia.link_parser.LinkParser;
import nu.marginalia.model.EdgeDomain; import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl; import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.HttpFetchResult; import nu.marginalia.model.body.HttpFetchResult;
import okhttp3.OkHttpClient; import org.apache.hc.client5.http.classic.HttpClient;
import okhttp3.Request; import org.apache.hc.client5.http.cookie.BasicCookieStore;
import org.apache.hc.client5.http.cookie.CookieStore;
import org.apache.hc.core5.http.ClassicHttpRequest;
import org.apache.hc.core5.http.NameValuePair;
import org.jetbrains.annotations.Nullable; import org.jetbrains.annotations.Nullable;
import org.netpreserve.jwarc.*; import org.netpreserve.jwarc.*;
import org.slf4j.Logger; import org.slf4j.Logger;
@@ -16,18 +20,20 @@ import org.slf4j.LoggerFactory;
import java.io.IOException; import java.io.IOException;
import java.io.InputStream; import java.io.InputStream;
import java.net.InetAddress; import java.net.InetAddress;
import java.net.SocketTimeoutException;
import java.net.URI; import java.net.URI;
import java.net.URISyntaxException; import java.net.URISyntaxException;
import java.nio.charset.StandardCharsets; import java.nio.charset.StandardCharsets;
import java.nio.file.Files; import java.nio.file.Files;
import java.nio.file.Path; import java.nio.file.Path;
import java.security.NoSuchAlgorithmException; import java.security.NoSuchAlgorithmException;
import java.time.Duration;
import java.time.Instant; import java.time.Instant;
import java.util.*; import java.util.*;
/** Based on JWarc's fetch method, APL 2.0 license /** Based on JWarc's fetch method, APL 2.0 license
* <p></p> * <p></p>
* This class wraps OkHttp's OkHttpClient and records the HTTP request and response in a WARC file, * This class wraps HttpClient and records the HTTP request and response in a WARC file,
* as best is possible given not all the data is available at the same time and needs to * as best is possible given not all the data is available at the same time and needs to
* be reconstructed. * be reconstructed.
*/ */
@@ -47,20 +53,23 @@ public class WarcRecorder implements AutoCloseable {
// Affix a version string in case we need to change the format in the future // Affix a version string in case we need to change the format in the future
// in some way // in some way
private final String warcRecorderVersion = "1.0"; private final String warcRecorderVersion = "1.0";
private final CookieStore cookies;
// We need to know if the site uses cookies so this can be reported among the search results private final LinkParser linkParser = new LinkParser();
// -- flip this to true if we see any cookies. This information will also be painted on any
// revisited pages. It's not 100% perfect and a bit order dependent, but it's good enough.
private final WarcXCookieInformationHeader cookieInformation = new WarcXCookieInformationHeader();
/** /**
* Create a new WarcRecorder that will write to the given file * Create a new WarcRecorder that will write to the given file
* *
* @param warcFile The file to write to * @param warcFile The file to write to
*/ */
public WarcRecorder(Path warcFile) throws IOException { public WarcRecorder(Path warcFile, HttpFetcherImpl fetcher) throws IOException {
this.warcFile = warcFile; this.warcFile = warcFile;
this.writer = new WarcWriter(warcFile); this.writer = new WarcWriter(warcFile);
this.cookies = fetcher.getCookies();
}
public WarcRecorder(Path warcFile, CookieStore cookies) throws IOException {
this.warcFile = warcFile;
this.writer = new WarcWriter(warcFile);
this.cookies = cookies;
} }
/** /**
@@ -70,112 +79,177 @@ public class WarcRecorder implements AutoCloseable {
public WarcRecorder() throws IOException { public WarcRecorder() throws IOException {
this.warcFile = Files.createTempFile("warc", ".warc.gz"); this.warcFile = Files.createTempFile("warc", ".warc.gz");
this.writer = new WarcWriter(this.warcFile); this.writer = new WarcWriter(this.warcFile);
this.cookies = new BasicCookieStore();
temporaryFile = true; temporaryFile = true;
} }
public HttpFetchResult fetch(OkHttpClient client, Request request) throws NoSuchAlgorithmException, private boolean hasCookies() {
IOException, return !cookies.getCookies().isEmpty();
URISyntaxException, }
InterruptedException
public HttpFetchResult fetch(HttpClient client,
ClassicHttpRequest request)
throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException
{ {
URI requestUri = request.url().uri(); return fetch(client, request, Duration.ofMillis(MAX_TIME));
}
public HttpFetchResult fetch(HttpClient client,
ClassicHttpRequest request,
Duration timeout)
throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException
{
URI requestUri = request.getUri();
WarcDigestBuilder responseDigestBuilder = new WarcDigestBuilder(); WarcDigestBuilder responseDigestBuilder = new WarcDigestBuilder();
WarcDigestBuilder payloadDigestBuilder = new WarcDigestBuilder(); WarcDigestBuilder payloadDigestBuilder = new WarcDigestBuilder();
String ip;
Instant date = Instant.now(); Instant date = Instant.now();
var call = client.newCall(request); // Not entirely sure why we need to do this, but keeping it due to Chesterton's Fence
Map<String, List<String>> extraHeaders = new HashMap<>(request.getHeaders().length);
cookieInformation.update(client, request.url()); // Inject a range header to attempt to limit the size of the response
// to the maximum size we want to store, if the server supports it.
request.addHeader("Range", "bytes=0-"+MAX_SIZE);
try (var response = call.execute(); try {
WarcInputBuffer inputBuffer = WarcInputBuffer.forResponse(response)) return client.execute(request, response -> {
{
byte[] responseHeaders = WarcProtocolReconstructor.getResponseHeader(response, inputBuffer.size()).getBytes(StandardCharsets.UTF_8);
ResponseDataBuffer responseDataBuffer = new ResponseDataBuffer(inputBuffer.size() + responseHeaders.length); try (WarcInputBuffer inputBuffer = WarcInputBuffer.forResponse(response, timeout);
InputStream inputStream = inputBuffer.read(); InputStream inputStream = inputBuffer.read()) {
ip = IpInterceptingNetworkInterceptor.getIpFromResponse(response); // Build and write the request
responseDataBuffer.put(responseHeaders); WarcDigestBuilder requestDigestBuilder = new WarcDigestBuilder();
responseDataBuffer.updateDigest(responseDigestBuilder, 0, responseHeaders.length);
int dataStart = responseDataBuffer.pos(); byte[] httpRequestString = WarcProtocolReconstructor
.getHttpRequestString(
request.getMethod(),
request.getHeaders(),
extraHeaders,
requestUri)
.getBytes();
for (;;) { requestDigestBuilder.update(httpRequestString);
int remainingLength = responseDataBuffer.remaining();
if (remainingLength == 0)
break;
int startPos = responseDataBuffer.pos(); WarcRequest warcRequest = new WarcRequest.Builder(requestUri)
.blockDigest(requestDigestBuilder.build())
.date(date)
.body(MediaType.HTTP_REQUEST, httpRequestString)
.build();
int n = responseDataBuffer.readFrom(inputStream, remainingLength); warcRequest.http(); // force HTTP header to be parsed before body is consumed so that caller can use it
if (n < 0) writer.write(warcRequest);
break;
responseDataBuffer.updateDigest(responseDigestBuilder, startPos, n); if (hasCookies()) {
responseDataBuffer.updateDigest(payloadDigestBuilder, startPos, n); extraHeaders.put("X-Has-Cookies", List.of("1"));
} }
// It looks like this might be the same as requestUri, but it's not; byte[] responseHeaders = WarcProtocolReconstructor.getResponseHeader(response, inputBuffer.size()).getBytes(StandardCharsets.UTF_8);
// it's the URI after resolving redirects.
final URI responseUri = response.request().url().uri();
WarcResponse.Builder responseBuilder = new WarcResponse.Builder(responseUri) ResponseDataBuffer responseDataBuffer = new ResponseDataBuffer(inputBuffer.size() + responseHeaders.length);
.blockDigest(responseDigestBuilder.build())
.date(date)
.body(MediaType.HTTP_RESPONSE, responseDataBuffer.copyBytes());
cookieInformation.paint(responseBuilder); responseDataBuffer.put(responseHeaders);
responseDataBuffer.updateDigest(responseDigestBuilder, 0, responseHeaders.length);
if (ip != null) responseBuilder.ipAddress(InetAddress.getByName(ip)); int dataStart = responseDataBuffer.pos();
responseBuilder.payloadDigest(payloadDigestBuilder.build()); for (;;) {
responseBuilder.truncated(inputBuffer.truncationReason()); int remainingLength = responseDataBuffer.remaining();
if (remainingLength == 0)
break;
// Build and write the response int startPos = responseDataBuffer.pos();
var warcResponse = responseBuilder.build(); int n = responseDataBuffer.readFrom(inputStream, remainingLength);
warcResponse.http(); // force HTTP header to be parsed before body is consumed so that caller can use it if (n < 0)
writer.write(warcResponse); break;
// Build and write the request responseDataBuffer.updateDigest(responseDigestBuilder, startPos, n);
responseDataBuffer.updateDigest(payloadDigestBuilder, startPos, n);
}
WarcDigestBuilder requestDigestBuilder = new WarcDigestBuilder(); // with some http client libraries, that resolve redirects transparently, this might be different
// from the request URI, but currently we don't have transparent redirect resolution so it's always
// the same (though let's keep the variables separate in case this changes)
final URI responseUri = requestUri;
byte[] httpRequestString = WarcProtocolReconstructor WarcResponse.Builder responseBuilder = new WarcResponse.Builder(responseUri)
.getHttpRequestString( .blockDigest(responseDigestBuilder.build())
response.request().method(), .date(date)
response.request().headers().toMultimap(), .concurrentTo(warcRequest.id())
request.headers().toMultimap(), .body(MediaType.HTTP_RESPONSE, responseDataBuffer.copyBytes());
requestUri)
.getBytes();
requestDigestBuilder.update(httpRequestString); InetAddress inetAddress = InetAddress.getByName(responseUri.getHost());
responseBuilder.ipAddress(inetAddress);
responseBuilder.payloadDigest(payloadDigestBuilder.build());
responseBuilder.truncated(inputBuffer.truncationReason());
WarcRequest warcRequest = new WarcRequest.Builder(requestUri) // Build and write the response
.blockDigest(requestDigestBuilder.build())
.date(date)
.body(MediaType.HTTP_REQUEST, httpRequestString)
.concurrentTo(warcResponse.id())
.build();
warcRequest.http(); // force HTTP header to be parsed before body is consumed so that caller can use it var warcResponse = responseBuilder.build();
writer.write(warcRequest); warcResponse.http(); // force HTTP header to be parsed before body is consumed so that caller can use it
writer.write(warcResponse);
return new HttpFetchResult.ResultOk(responseUri, if (Duration.between(date, Instant.now()).compareTo(Duration.ofSeconds(9)) > 0
response.code(), && inputBuffer.size() < 2048
inputBuffer.headers(), && !requestUri.getPath().endsWith("robots.txt")) // don't bail on robots.txt
ip, {
responseDataBuffer.data, // Fast detection and mitigation of crawler traps that respond with slow
dataStart, // small responses, with a high branching factor
responseDataBuffer.length() - dataStart);
} // Note we bail *after* writing the warc records, this will effectively only
catch (Exception ex) { // prevent link extraction from the document.
logger.warn("URL {} took too long to fetch ({}s) and was too small for the effort ({}b)",
requestUri,
Duration.between(date, Instant.now()).getSeconds(),
inputBuffer.size()
);
return new HttpFetchResult.ResultException(new IOException("Likely crawler trap"));
}
if (response.getCode() == 301 || response.getCode() == 302 || response.getCode() == 307) {
// If the server responds with a redirect, we need to
// update the request URI to the new location
EdgeUrl redirectLocation = Optional.ofNullable(response.getFirstHeader("Location"))
.map(NameValuePair::getValue)
.flatMap(location -> linkParser.parseLink(new EdgeUrl(requestUri), location))
.orElse(null);
if (redirectLocation != null) {
// If the redirect location is a valid URL, we need to update the request URI
return new HttpFetchResult.ResultRedirect(redirectLocation);
} else {
// If the redirect location is not a valid URL, we need to throw an exception
return new HttpFetchResult.ResultException(new IOException("Invalid redirect location: " + response.getFirstHeader("Location")));
}
}
return new HttpFetchResult.ResultOk(responseUri,
response.getCode(),
inputBuffer.headers(),
inetAddress.getHostAddress(),
responseDataBuffer.data,
dataStart,
responseDataBuffer.length() - dataStart);
} catch (Exception ex) {
flagAsError(new EdgeUrl(requestUri), ex); // write a WARC record to indicate the error
logger.warn("Failed to fetch URL {}: {}", requestUri, ex.getMessage());
return new HttpFetchResult.ResultException(ex);
}
});
// the client.execute() method will throw an exception if the request times out
// or on other IO exceptions, so we need to catch those here as well as having
// exception handling in the response handler
} catch (SocketTimeoutException ex) {
flagAsTimeout(new EdgeUrl(requestUri)); // write a WARC record to indicate the timeout
return new HttpFetchResult.ResultException(ex);
} catch (IOException ex) {
flagAsError(new EdgeUrl(requestUri), ex); // write a WARC record to indicate the error
logger.warn("Failed to fetch URL {}: {}", requestUri, ex.getMessage()); logger.warn("Failed to fetch URL {}: {}", requestUri, ex.getMessage());
return new HttpFetchResult.ResultException(ex); return new HttpFetchResult.ResultException(ex);
} }
@@ -185,7 +259,7 @@ public class WarcRecorder implements AutoCloseable {
writer.write(item); writer.write(item);
} }
private void saveOldResponse(EdgeUrl url, String contentType, int statusCode, String documentBody, @Nullable String headers, ContentTags contentTags) { private void saveOldResponse(EdgeUrl url, String contentType, int statusCode, byte[] documentBody, @Nullable String headers, ContentTags contentTags) {
try { try {
WarcDigestBuilder responseDigestBuilder = new WarcDigestBuilder(); WarcDigestBuilder responseDigestBuilder = new WarcDigestBuilder();
WarcDigestBuilder payloadDigestBuilder = new WarcDigestBuilder(); WarcDigestBuilder payloadDigestBuilder = new WarcDigestBuilder();
@@ -195,7 +269,7 @@ public class WarcRecorder implements AutoCloseable {
if (documentBody == null) { if (documentBody == null) {
bytes = new byte[0]; bytes = new byte[0];
} else { } else {
bytes = documentBody.getBytes(); bytes = documentBody;
} }
// Create a synthesis of custom headers and the original headers // Create a synthesis of custom headers and the original headers
@@ -246,7 +320,9 @@ public class WarcRecorder implements AutoCloseable {
.date(Instant.now()) .date(Instant.now())
.body(MediaType.HTTP_RESPONSE, responseDataBuffer.copyBytes()); .body(MediaType.HTTP_RESPONSE, responseDataBuffer.copyBytes());
cookieInformation.paint(builder); if (hasCookies()) {
builder.addHeader("X-Has-Cookies", "1");
}
var reference = builder.build(); var reference = builder.build();
@@ -264,7 +340,7 @@ public class WarcRecorder implements AutoCloseable {
* an E-Tag or Last-Modified header, and the server responds with a 304 Not Modified. In this * an E-Tag or Last-Modified header, and the server responds with a 304 Not Modified. In this
* scenario we want to record the data as it was in the previous crawl, but not re-fetch it. * scenario we want to record the data as it was in the previous crawl, but not re-fetch it.
*/ */
public void writeReferenceCopy(EdgeUrl url, String contentType, int statusCode, String documentBody, @Nullable String headers, ContentTags ctags) { public void writeReferenceCopy(EdgeUrl url, String contentType, int statusCode, byte[] documentBody, @Nullable String headers, ContentTags ctags) {
saveOldResponse(url, contentType, statusCode, documentBody, headers, ctags); saveOldResponse(url, contentType, statusCode, documentBody, headers, ctags);
} }
@@ -285,6 +361,9 @@ public class WarcRecorder implements AutoCloseable {
case HttpFetcherImpl.DomainProbeResult.Ok ok: case HttpFetcherImpl.DomainProbeResult.Ok ok:
fields.put("X-WARC-Probe-Status", List.of("OK")); fields.put("X-WARC-Probe-Status", List.of("OK"));
break; break;
case HttpFetcher.DomainProbeResult.RedirectSameDomain_Internal redirectSameDomain:
fields.put("X-WARC-Probe-Status", List.of("REDIR-INTERNAL"));
break;
} }
var warcinfo = new Warcinfo.Builder() var warcinfo = new Warcinfo.Builder()


@@ -44,6 +44,14 @@ public class DomainLocks {
        return new Semaphore(2);
    }

+   public boolean canLock(EdgeDomain domain) {
+       Semaphore sem = locks.get(domain.topDomain.toLowerCase());
+       if (null == sem)
+           return true;
+       else
+           return sem.availablePermits() > 0;
+   }
+
    public static class DomainLock implements AutoCloseable {
        private final String domainName;
        private final Semaphore semaphore;


@@ -4,6 +4,7 @@ import nu.marginalia.ContentTypes;
import nu.marginalia.io.SerializableCrawlDataStream; import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.lsh.EasyLSH; import nu.marginalia.lsh.EasyLSH;
import nu.marginalia.model.crawldata.CrawledDocument; import nu.marginalia.model.crawldata.CrawledDocument;
import org.jetbrains.annotations.NotNull;
import org.slf4j.Logger; import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
@@ -11,54 +12,76 @@ import javax.annotation.Nullable;
import java.io.IOException; import java.io.IOException;
import java.nio.file.Files; import java.nio.file.Files;
import java.nio.file.Path; import java.nio.file.Path;
import java.util.Iterator;
import java.util.Objects;
import java.util.Optional;
/** A reference to a domain that has been crawled before. */ /** A reference to a domain that has been crawled before. */
public class CrawlDataReference implements AutoCloseable { public class CrawlDataReference implements AutoCloseable, Iterable<CrawledDocument> {
private boolean closed = false;
@Nullable
private final Path path;
@Nullable
private SerializableCrawlDataStream data = null;
private final SerializableCrawlDataStream data;
private static final Logger logger = LoggerFactory.getLogger(CrawlDataReference.class); private static final Logger logger = LoggerFactory.getLogger(CrawlDataReference.class);
public CrawlDataReference(SerializableCrawlDataStream data) { public CrawlDataReference(@Nullable Path path) {
this.data = data; this.path = path;
} }
public CrawlDataReference() { public CrawlDataReference() {
this(SerializableCrawlDataStream.empty()); this(null);
} }
/** Delete the associated data from disk, if it exists */ /** Delete the associated data from disk, if it exists */
public void delete() throws IOException { public void delete() throws IOException {
Path filePath = data.path(); if (path != null) {
Files.deleteIfExists(path);
if (filePath != null) {
Files.deleteIfExists(filePath);
} }
} }
/** Get the next document from the crawl data, public @NotNull Iterator<CrawledDocument> iterator() {
* returning null when there are no more documents
* available
*/
@Nullable
public CrawledDocument nextDocument() {
try {
while (data.hasNext()) {
if (data.next() instanceof CrawledDocument doc) {
if (!ContentTypes.isAccepted(doc.contentType))
continue;
return doc; requireStream();
// Guaranteed by requireStream, but helps java
Objects.requireNonNull(data);
return data.map(next -> {
if (next instanceof CrawledDocument doc && ContentTypes.isAccepted(doc.contentType)) {
return Optional.of(doc);
}
else {
return Optional.empty();
}
});
}
/** After calling this method, data is guaranteed to be non-null */
private void requireStream() {
if (closed) {
throw new IllegalStateException("Use after close()");
}
if (data == null) {
try {
if (path != null) {
data = SerializableCrawlDataStream.openDataStream(path);
return;
} }
} }
} catch (Exception ex) {
catch (IOException ex) { logger.error("Failed to open stream", ex);
logger.error("Failed to read next document", ex); }
}
return null; data = SerializableCrawlDataStream.empty();
}
} }
public static boolean isContentBodySame(String one, String other) { public static boolean isContentBodySame(byte[] one, byte[] other) {
final long contentHashOne = contentHash(one); final long contentHashOne = contentHash(one);
final long contentHashOther = contentHash(other); final long contentHashOther = contentHash(other);
@@ -66,7 +89,7 @@ public class CrawlDataReference implements AutoCloseable {
return EasyLSH.hammingDistance(contentHashOne, contentHashOther) < 4; return EasyLSH.hammingDistance(contentHashOne, contentHashOther) < 4;
} }
private static long contentHash(String content) { private static long contentHash(byte[] content) {
EasyLSH hash = new EasyLSH(); EasyLSH hash = new EasyLSH();
int next = 0; int next = 0;
@@ -74,8 +97,8 @@ public class CrawlDataReference implements AutoCloseable {
// In a naive best-effort fashion, extract the text // In a naive best-effort fashion, extract the text
// content of the document and feed it into the LSH // content of the document and feed it into the LSH
for (int i = 0; i < content.length(); i++) { for (byte b : content) {
char c = content.charAt(i); char c = (char) b;
if (c == '<') { if (c == '<') {
isInTag = true; isInTag = true;
} else if (c == '>') { } else if (c == '>') {
@@ -98,7 +121,12 @@ public class CrawlDataReference implements AutoCloseable {
} }
@Override @Override
public void close() throws Exception { public void close() throws IOException {
data.close(); if (!closed) {
if (data != null) {
data.close();
}
closed = true;
}
} }
} }
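A small usage sketch (not part of this changeset) of the reworked CrawlDataReference, which is now Iterable over the previous crawl's documents. The path is a made-up example, and the snippet assumes it lives in the same package as the class.

import java.nio.file.Path;

import nu.marginalia.model.crawldata.CrawledDocument;

class CrawlDataReferenceExample {
    public static void main(String[] args) throws Exception {
        Path oldCrawlData = Path.of("/tmp/example.slop.zip"); // hypothetical location of a previous crawl

        try (CrawlDataReference reference = new CrawlDataReference(oldCrawlData)) {
            for (CrawledDocument doc : reference) {
                // Only documents with accepted content types are yielded by the iterator
                System.out.println(doc.contentType);
            }
        }
    }
}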
