mirror of https://github.com/MarginaliaSearch/MarginaliaSearch.git synced 2025-10-06 07:32:38 +02:00

Compare commits


101 Commits

Author SHA1 Message Date
Viktor Lofgren
e84d5c497a (crawler) Reduce the likelihood of crawler tasks locking on domains before they are ready
Change to a bounded queue and add a sleep to reduce the number of effectively busy-looping threads.
2025-04-21 00:39:26 +02:00
Viktor Lofgren
2d2d3e2466 (crawler) Reduce the likelihood of crawler tasks locking on domains before they are ready
Change to a bounded queue and add a sleep to reduce the number of effectively busy-looping threads.
2025-04-21 00:36:48 +02:00
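
A minimal sketch of the bounded-queue-plus-sleep approach described in the two commits above; the class and field names and the queue size and sleep interval are made up for illustration:

```java
import java.util.concurrent.ArrayBlockingQueue;

// Illustrative only: a bounded queue of pending domain tasks where the
// producer backs off briefly instead of spinning when no worker is ready.
class DomainTaskFeeder {
    private final ArrayBlockingQueue<Runnable> taskQueue = new ArrayBlockingQueue<>(64);

    void feed(Runnable domainTask) throws InterruptedException {
        while (!taskQueue.offer(domainTask)) {
            Thread.sleep(100); // back off instead of busy-looping on a full queue
        }
    }
}
```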
Viktor Lofgren
647dd9b12f (crawler) Reduce the likelihood of crawler tasks locking on domains before they are ready 2025-04-21 00:24:30 +02:00
Viktor Lofgren
de4e2849ce (crawler) Tweak request retry counts
Increase the default number of tries to 3, but don't retry on SSL errors as they are unlikely to fix themselves in the short term.
2025-04-19 00:19:48 +02:00
Viktor Lofgren
3c43f1954e (crawler) Add custom cookie store implementation
Apache HttpClient's cookie implementation builds an enormous concurrent hashmap with every cookie for every domain ever crawled.  This is a big waste of resources.

Replacing it with a fairly crude domain-isolated instance, as we are primarily interested in answering whether a cookie is set, and we will never retain cookies long term.
2025-04-18 13:04:22 +02:00
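
A rough sketch of the domain-isolated idea, under the assumption that all the crawler needs is to know whether a domain has set any cookies; names and structure are illustrative, not the actual implementation:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Per-domain cookie record: instead of a global cookie jar, each crawl task
// keeps one of these for the domain it is working on and discards it afterwards.
class DomainCookies {
    private final Set<String> cookieNames = ConcurrentHashMap.newKeySet();

    void onSetCookieHeader(String setCookieValue) {
        int eq = setCookieValue.indexOf('=');
        if (eq > 0) {
            cookieNames.add(setCookieValue.substring(0, eq).trim());
        }
    }

    boolean hasCookies() {
        return !cookieNames.isEmpty();
    }
}
```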
Viktor Lofgren
fa2462ec39 (crawler) Re-enable aborts on timeout 2025-04-18 12:59:34 +02:00
Viktor Lofgren
f4ad7145db (crawler) Disable SO_LINGER 2025-04-18 01:42:02 +02:00
Viktor Lofgren
068b450180 (crawler) Temporarily disable request.abort() 2025-04-18 01:25:56 +02:00
Viktor Lofgren
05b909a21f (crawler) Add logging to get more info about connection leak 2025-04-18 01:06:52 +02:00
Viktor Lofgren
3d179cddce (crawler) Correctly consume entity in sitemap retrieval 2025-04-18 00:32:21 +02:00
Viktor Lofgren
1a2aae496a (crawler) Correct handling and abortion of HttpClient's requests
There was a resource leak in the initial implementation of the Apache HttpClient WarcInputBuffer, which failed to free up its resources.

Using HttpGet objects instead of the Classic...Request objects, as the latter fail to expose an abort() method.
2025-04-18 00:16:26 +02:00
Viktor Lofgren
353cdffb3f (crawler) Increase connection request timeout, restore congestion timeout 2025-04-17 21:32:06 +02:00
Viktor Lofgren
2e3f1313c7 (crawler) Log exceptions while crawling in crawler audit log 2025-04-17 21:18:09 +02:00
Viktor Lofgren
58e6f141ce (crawler) Reduce congestion throttle go-rate 2025-04-17 20:36:58 +02:00
Viktor Lofgren
500f63e921 (crawler) Lower max conn per route 2025-04-17 18:36:16 +02:00
Viktor Lofgren
6dfbedda1e (crawler) Increase max conn per route and connection timeout 2025-04-17 18:31:46 +02:00
Viktor Lofgren
9715ddb105 (crawler) Increase max pool size to a large value 2025-04-17 18:22:58 +02:00
Viktor Lofgren
1fc6313a77 (crawler) Remove log noise when retrying a bad URL 2025-04-17 17:10:46 +02:00
Viktor Lofgren
b1249d5b8a (crawler) Fix broken test. 2025-04-17 17:01:42 +02:00
Viktor
ef95d59b07 Merge pull request #161 from MarginaliaSearch/apache-httpclient-in-crawler
The previously used Java HttpClient seems unsuitable for crawler usage; it led to issues like send() operations sometimes hanging forever, with clunky workarounds such as running each send operation in a separate Future that can be cancelled on a timeout.

The most damning flaw is that it does not offer socket timeouts. If a server responds in a timely manner but then, whether due to high load or malice, stops sending data, Java's built-in HttpClient will hang forever.

It simply has too many assumptions that break, and fails to adequately expose the inner workings of the connection pool to a degree that makes it possible to configure it in a satisfactory manner, such as setting an SO_LINGER value or limiting the number of concurrent connections to a host.

Apache's HttpClient solves all these problems.

The change also includes a new battery of tests for the HttpFetcher, and refactors the retriever class a bit to move stuff into the HttpFetcher, leading to a better separation of concerns.

The crawler will also be a bit more clever when fetching documents, and attempt to use range queries where supported to limit the number of bytes, as interrupting connections is undesirable and leads to connection storms and bufferbloat.
2025-04-17 16:57:19 +02:00
Viktor Lofgren
acdd8664f5 (crawler) More logging for the crawler, in a separate file. 2025-04-17 16:55:50 +02:00
Viktor Lofgren
6b12eac58a (crawler) Fix crawler retriever test to use the slop format 2025-04-17 16:35:13 +02:00
Viktor Lofgren
bb3f1f395a (crawler) Fix bug where headers were not stored correctly
This was the result of refactoring to Apache HttpClient.
2025-04-17 16:34:41 +02:00
Viktor Lofgren
b661beef41 (crawler) Amend recrawl logic to match redirects as being unchanged if their Location is the same. 2025-04-17 16:34:05 +02:00
Viktor Lofgren
9888c47f19 (crawler) Add custom Keep-Alive settings for HttpClient with max keep-alive of 30s 2025-04-17 15:25:46 +02:00
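A minimal sketch of capping keep-alive, assuming Apache HttpClient 5; the commit may configure other connection parameters as well:

```java
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.core5.util.TimeValue;

class KeepAliveExample {
    static CloseableHttpClient buildClient() {
        // Reuse idle connections for at most 30 seconds, regardless of what the
        // server's Keep-Alive header suggests.
        return HttpClients.custom()
                .setKeepAliveStrategy((response, context) -> TimeValue.ofSeconds(30))
                .build();
    }
}
```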
Viktor Lofgren
dcef7e955b (crawler) Try to avoid unnecessary connection resets
In order to keep connections alive, the crawler will consume data past its max size (but hope and pray the server supports range queries) as long as we've not exceeded the timeout.

This permits us to keep the connection alive in more scenarios, which is helpful for the health of the network stack, as constant TCP handshakes can lead to quite a lot of buffer bloat.

This will increase the bandwidth requirements in some scenarios, but on the other hand, it will increase the available bandwidth as well.
2025-04-17 14:51:33 +02:00
Viktor Lofgren
b3973a1dd7 (crawler) Remove unnecessary crawl delay when not ct-probing
The crawler would *always* incur the crawl delay penalty associated with content type probing, even if it wasn't actually probing.  Removing this delay when we are not probing.
2025-04-17 14:39:04 +02:00
Viktor Lofgren
8bd05d6d90 (crawler) Attempt to use range queries where available
This might help in some circumstances to avoid fetching more data than we are interested in.
2025-04-17 14:37:55 +02:00
Viktor Lofgren
59df8e356e (crawler) Do not fail domain and content type probe on 405
Some endpoints do not support the HEAD method.  This has historically broken the crawler when it attempts to use HEAD to probe certain URLs that are suspected of being e.g. binary.

The change makes it so that we bypass the probing on 405 instead, and for the domain probe logic, we switch to a small range-queried GET.
2025-04-17 13:54:28 +02:00
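
A hedged sketch of the fallback idea, assuming Apache HttpClient 5; the method name and the range size are illustrative:

```java
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.classic.methods.HttpHead;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;

import java.io.IOException;

class ProbeExample {
    // Probe with HEAD; if the server answers 405 Method Not Allowed, fall back
    // to a small range-limited GET instead of treating the URL as failed.
    static int probe(CloseableHttpClient client, String url) throws IOException {
        int headStatus = client.execute(new HttpHead(url), rsp -> rsp.getCode());
        if (headStatus != 405) {
            return headStatus;
        }

        HttpGet get = new HttpGet(url);
        get.addHeader("Range", "bytes=0-255");
        return client.execute(get, rsp -> rsp.getCode());
    }
}
```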
Viktor Lofgren
7161162a35 (crawler) Write WARC records in a sane order 2025-04-17 13:36:39 +02:00
Viktor Lofgren
d7c4c5141f (crawler) Migrate to Apache HttpClient for crawler
The previously used Java HttpClient seems unsuitable for crawler usage;
it led to issues like send() operations sometimes hanging forever,
with clunky workarounds such as running each send operation in a separate
Future that can be cancelled on a timeout.

It has too many assumptions that break, and fails to adequately expose
the inner workings of the connection pool to a degree that makes it possible
to configure in a satisfactory manner.

Apache's HttpClient solves all these problems.

The change also includes a new battery of tests for the HttpFetcher,
and refactors the retriever class a bit to move stuff into the HttpFetcher,
leading to a better separation of concerns.
2025-04-17 12:51:08 +02:00
Viktor Lofgren
88e9b8fb05 (crawler) Throttle the establishment of new connections
To avoid network congestion from the packet storm created when establishing hundreds or thousands of connections at the same time, pace the opening of new connections.
2025-04-08 22:53:02 +02:00
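
A sketch of one way to pace connection establishment; the interval below is illustrative and not necessarily the value used by the crawler:

```java
// Enforce a minimum interval between the moments new connections are opened,
// spreading the TCP handshake burst over time instead of firing hundreds at once.
class ConnectionPacer {
    private static final long MIN_INTERVAL_MS = 20;
    private long lastConnectAt = 0;

    synchronized void awaitTurn() throws InterruptedException {
        long wait = lastConnectAt + MIN_INTERVAL_MS - System.currentTimeMillis();
        if (wait > 0) {
            Thread.sleep(wait);
        }
        lastConnectAt = System.currentTimeMillis();
    }
}
```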
Viktor Lofgren
b6265cee11 (feeds) Add timeout code to send()
Due to the unique way Java's HttpClient implements timeouts, we must always wrap it in an executor to catch the scenario where a server stops sending data mid-response, which would otherwise hang the send method forever.
2025-04-08 22:09:59 +02:00
Viktor Lofgren
c91af247e9 (rate-limit) Fix rate limiting logic
The rate limiter was misconfigured to regenerate tokens at a fixed rate of 1 per refillRate seconds, rather than refillRate tokens per minute. Additionally, the default bucket size is increased to 4x the refill rate.
2025-04-05 12:26:26 +02:00
Viktor Lofgren
7a31227de1 (crawler) Filter out robots.txt-sitemaps that belong to different domains 2025-04-02 13:35:37 +02:00
Viktor Lofgren
4f477604c5 (crawler) Improve error handling in parquet->slop conversion
Parquet code throws a RuntimeException, which was not correctly caught, leading to a failure to crawl.
2025-04-02 13:16:01 +02:00
Viktor Lofgren
2970f4395b (minor) Test code cleanup 2025-04-02 13:16:01 +02:00
Viktor Lofgren
d1ec909b36 (crawler) Improve handling of timeouts to prevent crawler from getting stuck 2025-04-02 12:57:21 +02:00
Viktor Lofgren
c67c5bbf42 (crawler) Experimentally drop to HTTP 1.1 for crawler to see if this solves stuck send()s 2025-04-01 12:05:21 +02:00
Viktor Lofgren
ecb0e57a1a (crawler) Make the use of virtual threads in the crawler configurable via system properties 2025-03-27 21:26:05 +01:00
Viktor Lofgren
8c61f61b46 (crawler) Add crawling metadata to domainstate db 2025-03-27 16:38:37 +01:00
Viktor Lofgren
662a18c933 Revert "(crawler) Further rearrange crawl order"
This reverts commit 1c2426a052.

The change does not appear necessary to avoid problems.
2025-03-27 11:25:08 +01:00
Viktor Lofgren
1c2426a052 (crawler) Further rearrange crawl order
Limit crawl order preference to edu domains, to avoid hitting stuff like medium and wordpress with shotgun requests.
2025-03-27 11:19:20 +01:00
Viktor Lofgren
34df7441ac (crawler) Add some jitter to crawl delay to avoid accidentally synchronized requests 2025-03-27 11:15:16 +01:00
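A small illustration of the jitter idea; the fraction added here is an assumption, not the crawler's actual value:

```java
import java.util.concurrent.ThreadLocalRandom;

class CrawlDelayJitter {
    // Add a random fraction of the base delay so requests to different domains
    // don't accidentally fall into lockstep.
    static long jitteredDelayMs(long baseDelayMs) {
        return baseDelayMs + ThreadLocalRandom.current().nextLong(baseDelayMs / 4 + 1);
    }
}
```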
Viktor Lofgren
5387e2bd80 (crawler) Adjust crawl order to get a better mixture of domains 2025-03-27 11:12:48 +01:00
Viktor Lofgren
0f3b24d0f8 (crawler) Evaluate virtual threads for the crawler
The change also alters SimpleBlockingThreadPool to add the option to use virtual threads instead of platform threads.
2025-03-27 11:02:21 +01:00
Viktor Lofgren
a732095d2a (crawler) Improve crawl task ordering
Further improve the ordering of the crawl tasks in order to ensure that potentially blocking tasks are enqueued as soon as possible.
2025-03-26 16:51:37 +01:00
Viktor Lofgren
6607f0112f (crawler) Improve how the crawler deals with interruptions
In some cases, threads would previously fail to terminate when interrupted.
2025-03-26 16:19:57 +01:00
Viktor Lofgren
4913730de9 (jdk) Upgrade to Java 24 2025-03-26 13:26:06 +01:00
Viktor Lofgren
1db64f9d56 (chore) Fix zookeeper test by upgrading zk image version.
Test suddenly broke due to the increasing entropy of the universe.
2025-03-26 11:47:14 +01:00
Viktor Lofgren
4dcff14498 (search) Improve contrast with light mode 2025-03-25 13:15:31 +01:00
Viktor Lofgren
426658f64e (search) Improve contrast with light mode 2025-03-25 11:54:54 +01:00
Viktor Lofgren
2181b22f05 (crawler) Change default maxConcurrentRequests to 512
This seems like a more sensible default after testing a bit.  May need local tuning.
2025-03-22 12:11:09 +01:00
Viktor Lofgren
42bd79a609 (crawler) Experimentally throttle the number of active retrievals to see how this affects the network performance
There have been some indications that request storms lead to buffer bloat and bad throughput.

This adds a configurable semaphore, by default permitting 100 active requests.
2025-03-22 11:50:37 +01:00
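
A minimal sketch of the configurable semaphore; 100 permits matches the default mentioned above, everything else is illustrative:

```java
import java.util.concurrent.Semaphore;

class RetrievalLimiter {
    private final Semaphore activeRetrievals = new Semaphore(100);

    // Bound the number of retrievals in flight at any one time.
    void retrieve(Runnable fetchTask) throws InterruptedException {
        activeRetrievals.acquire();
        try {
            fetchTask.run();
        } finally {
            activeRetrievals.release();
        }
    }
}
```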
Viktor Lofgren
b91c1e528a (favicon) Send dummy svg result when image is missing
This prevents the browser from rendering a "broken image" in this scenario.
2025-03-21 15:15:14 +01:00
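
A hypothetical illustration of what such a dummy result could look like; the actual SVG served by the commit may differ:

```java
class FaviconFallback {
    // A blank 16x16 SVG served with an image/svg+xml content type renders as an
    // empty square rather than the browser's "broken image" placeholder.
    static final String DUMMY_SVG =
            "<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"16\" height=\"16\"/>";
    static final String CONTENT_TYPE = "image/svg+xml";
}
```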
Viktor Lofgren
b1130d7a04 (domainstatedb) Allow creation of disconnected db
This is required for executor services that do not have crawl data to still be able to initialize.
2025-03-21 14:59:36 +01:00
Viktor Lofgren
8364bcdc97 (favicon) Add favicons to the matchograms 2025-03-21 14:30:40 +01:00
Viktor Lofgren
626cab5fab (favicon) Add favicon to site overview 2025-03-21 14:15:23 +01:00
Viktor Lofgren
cfd4712191 (favicon) Add capability for fetching favicons 2025-03-21 13:38:58 +01:00
Viktor Lofgren
9f18ced73d (crawler) Improve deferred task behavior 2025-03-18 12:54:18 +01:00
Viktor Lofgren
18e91269ab (crawler) Improve deferred task behavior 2025-03-18 12:25:22 +01:00
Viktor Lofgren
e315ca5758 (search) Change icon for small web filter
The previous icon was of an irregular size and shifted the layout in an unaesthetic way.
2025-03-17 12:07:34 +01:00
Viktor Lofgren
3ceea17c1d (search) Adjustments to device detection in CSS
Use pointer:fine media query to better distinguish between mobile devices and PCs with a window in portrait orientation.

With this, we never show mobile filtering functionality on mobile; and never show the touch-inaccessible minimized sidebar on mobile.
2025-03-17 12:04:34 +01:00
Viktor Lofgren
b34527c1a3 (search) Add small web filter for new UI 2025-03-17 11:39:19 +01:00
Viktor Lofgren
185bf28fca (crawler) Correct issue leading to parquet files not being correctly preconverted
Path.endsWith("str") != String.endsWith(".str")
2025-03-10 13:48:12 +01:00
Viktor Lofgren
78cc25584a (crawler) Add error logging when entering bad path for historical crawl data 2025-03-10 13:38:40 +01:00
Viktor Lofgren
62ba30bacf (common) Log info about metrics server 2025-03-10 13:12:39 +01:00
Viktor Lofgren
3bb84eb206 (common) Log info about metrics server 2025-03-10 13:03:48 +01:00
Viktor Lofgren
be7d13ccce (crawler) Correct task execution logic in crawler
The old behavior would flag domains as pending too soon, leading to them being omitted from execution if they were not immediately available to run.
2025-03-09 13:47:51 +01:00
Viktor Lofgren
8c088a7c0b (crawler) Remove custom thread factory
This was causing issues, and not really doing much of benefit.
2025-03-09 11:50:52 +01:00
Viktor Lofgren
ea9a642b9b (crawler) More effective task scheduling in the crawler
This should hopefully allow more threads to be busy
2025-03-09 11:44:59 +01:00
Viktor Lofgren
27f528af6a (search) Fix "Remove Javascript" toggle
A bug was introduced at some point where the special keyword for filtering on javascript was changed to special:scripts, from js:true/js:false.

Solves issue #155
2025-02-28 12:03:04 +01:00
Viktor Lofgren
20ca41ec95 (processed model) Use String columns instead of Txt columns for SlopDocumentRecord
It's very likely TxtStringColumn is the culprit of the bug seen in https://github.com/MarginaliaSearch/MarginaliaSearch/issues/154 where the wrong URL was shown for a search result.
2025-02-24 11:41:51 +01:00
Viktor Lofgren
7671f0d9e4 (search) Display message when no search results are found 2025-02-24 11:15:55 +01:00
Viktor Lofgren
44d6bc71b7 (assistant) Migrate to Jooby framework 2025-02-15 13:28:12 +01:00
Viktor Lofgren
9d302e2973 (assistant) Migrate to Jooby framework 2025-02-15 13:26:04 +01:00
Viktor Lofgren
f553701224 (assistant) Migrate to Jooby framework 2025-02-15 13:21:48 +01:00
Viktor Lofgren
f076d05595 (deps) Upgrade slf4j to latest 2025-02-15 12:50:16 +01:00
Viktor Lofgren
b513809710 (*) Stopgap fix for metrics server initialization errors bringing down services 2025-02-14 17:09:48 +01:00
Viktor Lofgren
7519b28e21 (search) Correct exception from misbehaving bots feeding invalid urls 2025-02-14 17:05:24 +01:00
Viktor Lofgren
3eac4dd57f (search) Correct exception in error handler when page is missing 2025-02-14 17:00:21 +01:00
Viktor Lofgren
4c2810720a (search) Add redirect handler for full URLs in the /site endpoint 2025-02-14 16:31:11 +01:00
Viktor Lofgren
8480ba8daa (live-capture) Code cleanup 2025-02-04 14:05:36 +01:00
Viktor Lofgren
fbba392491 (live-capture) Send a UA-string from the browserless fetcher as well
The change also introduces a somewhat convoluted WireMock test to intercept and verify that these headers are in fact sent.
2025-02-04 13:36:49 +01:00
Viktor Lofgren
530eb35949 (update-rss) Do not fail the feed fetcher control actor if it takes a long time to complete. 2025-02-03 11:35:32 +01:00
Viktor Lofgren
c2dd2175a2 (search) Add new query expansion rule contracting WORD NUM pairs into WORD-NUM and WORDNUM 2025-02-01 13:13:30 +01:00
Viktor Lofgren
b8581b0f56 (crawler) Safe sanitization of headers during warc->slop conversion
The warc->slop converter was rejecting some items because they had headers that were representable in the Warc code's MessageHeader map implementation, but illegal in the HttpHeaders implementation.

Fixing this by manually filtering these out.  Ostensibly the constructor has a filtering predicate, but this annoyingly runs too late and fails to prevent the problem.
2025-01-31 12:47:42 +01:00
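
A sketch of the manual filtering idea, under the assumption that the offending headers are ones whose names are not valid HTTP tokens; the real converter may filter on different criteria:

```java
import java.net.http.HttpHeaders;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class HeaderSanitizerExample {
    // Drop header entries with non-token names *before* constructing HttpHeaders,
    // since relying on the constructor's own filter predicate runs too late.
    static HttpHeaders sanitize(Map<String, List<String>> rawHeaders) {
        Map<String, List<String>> clean = rawHeaders.entrySet().stream()
                .filter(e -> e.getKey().matches("[!#$%&'*+.^_`|~0-9A-Za-z-]+"))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        return HttpHeaders.of(clean, (name, value) -> true);
    }
}
```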
Viktor Lofgren
2ea34767d8 (crawler) Use the response URL when resolving relative links
The crawler was incorrectly using the request URL as the base URL when resolving relative links.  This caused problems when encountering redirects.

For example, if we fetch /log, which redirects to /log/, and find links to foo/ and bar/, these would resolve to /foo and /bar rather than /log/foo and /log/bar.
2025-01-31 12:40:13 +01:00
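
A small worked example of why the base URL matters, using java.net.URI and a placeholder domain:

```java
import java.net.URI;

class BaseUrlExample {
    public static void main(String[] args) {
        URI requestUrl  = URI.create("https://example.com/log");   // what was requested
        URI responseUrl = URI.create("https://example.com/log/");  // where the redirect landed

        System.out.println(requestUrl.resolve("foo/"));   // https://example.com/foo/
        System.out.println(responseUrl.resolve("foo/"));  // https://example.com/log/foo/
    }
}
```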
Viktor Lofgren
e9af838231 (actor) Fix migration actor final steps 2025-01-30 11:48:21 +01:00
Viktor Lofgren
ae0cad47c4 (actor) Utility method for getting a json prototype for actor states
If we can hook this into the control gui somehow, it'll make for a nice QOL upgrade when manually interacting with the actors.
2025-01-29 15:20:25 +01:00
Viktor Lofgren
5fbc8ef998 (misc) Tidying 2025-01-29 15:17:04 +01:00
Viktor Lofgren
32c6dd9e6a (actor) Delete old data in the migration actor 2025-01-29 14:51:46 +01:00
Viktor Lofgren
6ece6a6cfb (actor) Improve resilience for the migration actor 2025-01-29 14:43:09 +01:00
Viktor Lofgren
39cd1c18f8 Automatically run npm install tailwindcss@3 via setup.sh, as the new default version of the package is incompatible with the project 2025-01-29 12:21:08 +01:00
Viktor
eb65daaa88 Merge pull request #151 from Lionstiger/master
fix small grammar error in footerLegal.jte
2025-01-28 21:49:50 +01:00
Viktor
0bebdb6e33 Merge branch 'master' into master 2025-01-28 21:49:36 +01:00
Viktor Lofgren
1e50e392c6 (actor) Improve logging and error handling for data migration actor 2025-01-28 15:34:36 +01:00
Viktor Lofgren
fb673de370 (crawler) Change the header 'User-agent' to 'User-Agent' 2025-01-28 15:34:16 +01:00
Viktor Lofgren
eee73ab16c (crawler) Be more lenient when performing a domain probe 2025-01-28 15:24:30 +01:00
Viktor Lofgren
5354e034bf (search) Minor grammar fix 2025-01-27 18:36:31 +01:00
Magnus Wulf
72384ad6ca fix small grammar error 2025-01-27 15:04:57 +01:00
110 changed files with 3300 additions and 1011 deletions

View File

@@ -43,12 +43,11 @@ subprojects.forEach {it ->
}
ext {
jvmVersion=23
dockerImageBase='container-registry.oracle.com/graalvm/jdk:23'
jvmVersion = 24
dockerImageBase='container-registry.oracle.com/graalvm/jdk:24'
dockerImageTag='latest'
dockerImageRegistry='marginalia'
jibVersion = '3.4.4'
}
idea {

View File

@@ -24,58 +24,4 @@ public class LanguageModels {
this.fasttextLanguageModel = fasttextLanguageModel;
this.segments = segments;
}
public static LanguageModelsBuilder builder() {
return new LanguageModelsBuilder();
}
public static class LanguageModelsBuilder {
private Path termFrequencies;
private Path openNLPSentenceDetectionData;
private Path posRules;
private Path posDict;
private Path fasttextLanguageModel;
private Path segments;
LanguageModelsBuilder() {
}
public LanguageModelsBuilder termFrequencies(Path termFrequencies) {
this.termFrequencies = termFrequencies;
return this;
}
public LanguageModelsBuilder openNLPSentenceDetectionData(Path openNLPSentenceDetectionData) {
this.openNLPSentenceDetectionData = openNLPSentenceDetectionData;
return this;
}
public LanguageModelsBuilder posRules(Path posRules) {
this.posRules = posRules;
return this;
}
public LanguageModelsBuilder posDict(Path posDict) {
this.posDict = posDict;
return this;
}
public LanguageModelsBuilder fasttextLanguageModel(Path fasttextLanguageModel) {
this.fasttextLanguageModel = fasttextLanguageModel;
return this;
}
public LanguageModelsBuilder segments(Path segments) {
this.segments = segments;
return this;
}
public LanguageModels build() {
return new LanguageModels(this.termFrequencies, this.openNLPSentenceDetectionData, this.posRules, this.posDict, this.fasttextLanguageModel, this.segments);
}
public String toString() {
return "LanguageModels.LanguageModelsBuilder(termFrequencies=" + this.termFrequencies + ", openNLPSentenceDetectionData=" + this.openNLPSentenceDetectionData + ", posRules=" + this.posRules + ", posDict=" + this.posDict + ", fasttextLanguageModel=" + this.fasttextLanguageModel + ", segments=" + this.segments + ")";
}
}
}

View File

@@ -22,6 +22,7 @@ public class DbDomainQueries {
private static final Logger logger = LoggerFactory.getLogger(DbDomainQueries.class);
private final Cache<EdgeDomain, Integer> domainIdCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
private final Cache<EdgeDomain, DomainIdWithNode> domainWithNodeCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
private final Cache<Integer, EdgeDomain> domainNameCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
private final Cache<String, List<DomainWithNode>> siblingsCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
@@ -59,6 +60,34 @@ public class DbDomainQueries {
}
}
public DomainIdWithNode getDomainIdWithNode(EdgeDomain domain) throws NoSuchElementException {
try {
return domainWithNodeCache.get(domain, () -> {
try (var connection = dataSource.getConnection();
var stmt = connection.prepareStatement("SELECT ID, NODE_AFFINITY FROM EC_DOMAIN WHERE DOMAIN_NAME=?")) {
stmt.setString(1, domain.toString());
var rsp = stmt.executeQuery();
if (rsp.next()) {
return new DomainIdWithNode(rsp.getInt(1), rsp.getInt(2));
}
}
catch (SQLException ex) {
throw new RuntimeException(ex);
}
throw new NoSuchElementException();
});
}
catch (UncheckedExecutionException ex) {
throw new NoSuchElementException();
}
catch (ExecutionException ex) {
throw new RuntimeException(ex.getCause());
}
}
public OptionalInt tryGetDomainId(EdgeDomain domain) {
Integer maybeId = domainIdCache.getIfPresent(domain);
@@ -145,4 +174,6 @@ public class DbDomainQueries {
return nodeAffinity > 0;
}
}
public record DomainIdWithNode (int domainId, int nodeAffinity) { }
}

View File

@@ -14,7 +14,7 @@ public class EdgeDomain implements Serializable {
@Nonnull
public final String topDomain;
public EdgeDomain(String host) {
public EdgeDomain(@Nonnull String host) {
Objects.requireNonNull(host, "domain name must not be null");
host = host.toLowerCase();
@@ -61,6 +61,10 @@ public class EdgeDomain implements Serializable {
this.topDomain = topDomain;
}
public static String getTopDomain(String host) {
return new EdgeDomain(host).topDomain;
}
private boolean looksLikeGovTld(String host) {
if (host.length() < 8)
return false;
@@ -116,24 +120,6 @@ public class EdgeDomain implements Serializable {
return topDomain.substring(0, cutPoint).toLowerCase();
}
public String getLongDomainKey() {
StringBuilder ret = new StringBuilder();
int cutPoint = topDomain.indexOf('.');
if (cutPoint < 0) {
ret.append(topDomain);
} else {
ret.append(topDomain, 0, cutPoint);
}
if (!subDomain.isEmpty() && !"www".equals(subDomain)) {
ret.append(":");
ret.append(subDomain);
}
return ret.toString().toLowerCase();
}
/** If possible, try to provide an alias domain,
* i.e. a domain name that is very likely to link to this one
* */

View File

@@ -10,7 +10,9 @@ import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.LocalDateTime;
import java.util.*;
import java.util.HashSet;
import java.util.Optional;
import java.util.Set;
import java.util.function.Function;
/** WorkLog is a journal of work done by a process,
@@ -61,6 +63,12 @@ public class WorkLog implements AutoCloseable, Closeable {
return new WorkLoadIterable<>(logFile, mapper);
}
public static int countEntries(Path crawlerLog) throws IOException{
try (var linesStream = Files.lines(crawlerLog)) {
return (int) linesStream.filter(WorkLogEntry::isJobId).count();
}
}
// Use synchro over concurrent set to avoid competing writes
// - correct is better than fast here, it's sketchy enough to use
// a PrintWriter

View File

@@ -6,6 +6,7 @@ import nu.marginalia.service.ServiceId;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Enumeration;
@@ -115,11 +116,12 @@ public class ServiceConfigurationModule extends AbstractModule {
}
}
public static String getLocalNetworkIP() throws Exception {
public static String getLocalNetworkIP() throws IOException {
Enumeration<NetworkInterface> nets = NetworkInterface.getNetworkInterfaces();
while (nets.hasMoreElements()) {
NetworkInterface netif = nets.nextElement();
logger.info("Considering network interface {}: Up? {}, Loopback? {}", netif.getDisplayName(), netif.isUp(), netif.isLoopback());
if (!netif.isUp() || netif.isLoopback()) {
continue;
}
@@ -127,6 +129,7 @@ public class ServiceConfigurationModule extends AbstractModule {
Enumeration<InetAddress> inetAddresses = netif.getInetAddresses();
while (inetAddresses.hasMoreElements()) {
InetAddress addr = inetAddresses.nextElement();
logger.info("Considering address {}: SiteLocal? {}, Loopback? {}", addr.getHostAddress(), addr.isSiteLocalAddress(), addr.isLoopbackAddress());
if (addr.isSiteLocalAddress() && !addr.isLoopbackAddress()) {
return addr.getHostAddress();
}

View File

@@ -15,6 +15,7 @@ import org.slf4j.LoggerFactory;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
@@ -106,9 +107,12 @@ public class JoobyService {
config.externalAddress());
// FIXME: This won't work outside of docker, may need to submit a PR to jooby to allow classpaths here
jooby.install(new JteModule(Path.of("/app/resources/jte"), Path.of("/app/classes/jte-precompiled")));
jooby.assets("/*", Paths.get("/app/resources/static"));
if (Files.exists(Path.of("/app/resources/jte")) || Files.exists(Path.of("/app/classes/jte-precompiled"))) {
jooby.install(new JteModule(Path.of("/app/resources/jte"), Path.of("/app/classes/jte-precompiled")));
}
if (Files.exists(Path.of("/app/resources/static"))) {
jooby.assets("/*", Paths.get("/app/resources/static"));
}
var options = new ServerOptions();
options.setHost(config.bindAddress());
options.setPort(restEndpoint.port());

View File

@@ -6,25 +6,36 @@ import nu.marginalia.service.module.ServiceConfiguration;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.net.InetSocketAddress;
public class MetricsServer {
private static final Logger logger = LoggerFactory.getLogger(MetricsServer.class);
@Inject
public MetricsServer(ServiceConfiguration configuration) throws Exception {
public MetricsServer(ServiceConfiguration configuration) {
// If less than zero, we forego setting up a metrics server
if (configuration.metricsPort() < 0)
return;
Server server = new Server(new InetSocketAddress(configuration.bindAddress(), configuration.metricsPort()));
try {
Server server = new Server(new InetSocketAddress(configuration.bindAddress(), configuration.metricsPort()));
ServletContextHandler context = new ServletContextHandler();
context.setContextPath("/");
server.setHandler(context);
ServletContextHandler context = new ServletContextHandler();
context.setContextPath("/");
server.setHandler(context);
context.addServlet(new ServletHolder(new MetricsServlet()), "/metrics");
context.addServlet(new ServletHolder(new MetricsServlet()), "/metrics");
server.start();
logger.info("MetricsServer listening on {}:{}", configuration.bindAddress(), configuration.metricsPort());
server.start();
}
catch (Exception|NoSuchMethodError ex) {
logger.error("Failed to set up metrics server", ex);
}
}
}

View File

@@ -35,21 +35,8 @@ public class RateLimiter {
}
public static RateLimiter forExpensiveRequest() {
return new RateLimiter(5, 10);
}
public static RateLimiter custom(int perMinute) {
return new RateLimiter(perMinute, 60);
}
public static RateLimiter forSpamBots() {
return new RateLimiter(120, 3600);
}
public static RateLimiter forLogin() {
return new RateLimiter(3, 15);
return new RateLimiter(4 * perMinute, perMinute);
}
private void cleanIdleBuckets() {
@@ -62,7 +49,7 @@ public class RateLimiter {
}
private Bucket createBucket() {
var refill = Refill.greedy(1, Duration.ofSeconds(refillRate));
var refill = Refill.greedy(refillRate, Duration.ofSeconds(60));
var bw = Bandwidth.classic(capacity, refill);
return Bucket.builder().addLimit(bw).build();
}

View File

@@ -5,6 +5,7 @@
<Filters>
<MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="CRAWLER" onMatch="DENY" onMismatch="NEUTRAL" />
</Filters>
</Console>
<RollingFile name="LogToFile" fileName="${env:WMSA_LOG_DIR:-/var/log/wmsa}/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}.log" filePattern="/var/log/wmsa/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}-log-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz"
@@ -13,9 +14,20 @@
<Filters>
<MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="CRAWLER" onMatch="DENY" onMismatch="NEUTRAL" />
</Filters>
<SizeBasedTriggeringPolicy size="10MB" />
</RollingFile>
<RollingFile name="LogToFile" fileName="${env:WMSA_LOG_DIR:-/var/log/wmsa}/crawler-audit-${env:WMSA_SERVICE_NODE:-0}.log" filePattern="/var/log/wmsa/crawler-audit-${env:WMSA_SERVICE_NODE:-0}-log-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz"
ignoreExceptions="false">
<PatternLayout>
<Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS}: %msg{nolookups}%n</Pattern>
</PatternLayout>
<SizeBasedTriggeringPolicy size="100MB" />
<Filters>
<MarkerFilter marker="CRAWLER" onMatch="ALLOW" onMismatch="DENY" />
</Filters>
</RollingFile>
</Appenders>
<Loggers>
<Logger name="org.apache.zookeeper" level="WARN" />

View File

@@ -5,6 +5,7 @@
<Filters>
<MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="CRAWLER" onMatch="DENY" onMismatch="NEUTRAL" />
</Filters>
</Console>
<RollingFile name="LogToFile" fileName="${env:WMSA_LOG_DIR:-/var/log/wmsa}/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}.log" filePattern="/var/log/wmsa/wmsa-${sys:service-name}-${env:WMSA_SERVICE_NODE:-0}-log-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz"
@@ -17,6 +18,17 @@
<MarkerFilter marker="PROCESS" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="QUERY" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="HTTP" onMatch="DENY" onMismatch="NEUTRAL" />
<MarkerFilter marker="CRAWLER" onMatch="DENY" onMismatch="NEUTRAL" />
</Filters>
</RollingFile>
<RollingFile name="LogToFile" fileName="${env:WMSA_LOG_DIR:-/var/log/wmsa}/crawler-audit-${env:WMSA_SERVICE_NODE:-0}.log" filePattern="/var/log/wmsa/crawler-audit-${env:WMSA_SERVICE_NODE:-0}-log-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz"
ignoreExceptions="false">
<PatternLayout>
<Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS}: %msg{nolookups}%n</Pattern>
</PatternLayout>
<SizeBasedTriggeringPolicy size="100MB" />
<Filters>
<MarkerFilter marker="CRAWLER" onMatch="ALLOW" onMismatch="DENY" />
</Filters>
</RollingFile>
</Appenders>

View File

@@ -25,7 +25,7 @@ import static org.mockito.Mockito.when;
class ZkServiceRegistryTest {
private static final int ZOOKEEPER_PORT = 2181;
private static final GenericContainer<?> zookeeper =
new GenericContainer<>("zookeeper:3.8.0")
new GenericContainer<>("zookeeper:3.8")
.withExposedPorts(ZOOKEEPER_PORT);
List<ZkServiceRegistry> registries = new ArrayList<>();

View File

@@ -14,6 +14,8 @@ import nu.marginalia.mq.persistence.MqPersistence;
import nu.marginalia.nodecfg.NodeConfigurationService;
import nu.marginalia.nodecfg.model.NodeProfile;
import nu.marginalia.service.module.ServiceConfiguration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.time.Duration;
import java.time.LocalDateTime;
@@ -29,6 +31,7 @@ public class UpdateRssActor extends RecordActorPrototype {
private final NodeConfigurationService nodeConfigurationService;
private final MqPersistence persistence;
private static final Logger logger = LoggerFactory.getLogger(UpdateRssActor.class);
@Inject
public UpdateRssActor(Gson gson,
@@ -101,8 +104,8 @@ public class UpdateRssActor extends RecordActorPrototype {
case UpdateRefresh(int count, long msgId) -> {
MqMessage msg = persistence.waitForMessageTerminalState(msgId, Duration.ofSeconds(10), Duration.ofHours(12));
if (msg == null) {
// Retry the update
yield new Error("Failed to update feeds: message not found");
logger.warn("UpdateRefresh is taking a very long time");
yield new UpdateRefresh(count, msgId);
} else if (msg.state() != MqMessageState.OK) {
// Retry the update
yield new Error("Failed to update feeds: " + msg.state());
@@ -119,8 +122,8 @@ public class UpdateRssActor extends RecordActorPrototype {
case UpdateClean(long msgId) -> {
MqMessage msg = persistence.waitForMessageTerminalState(msgId, Duration.ofSeconds(10), Duration.ofHours(12));
if (msg == null) {
// Retry the update
yield new Error("Failed to update feeds: message not found");
logger.warn("UpdateClean is taking a very long time");
yield new UpdateClean(msgId);
} else if (msg.state() != MqMessageState.OK) {
// Retry the update
yield new Error("Failed to update feeds: " + msg.state());

View File

@@ -8,6 +8,7 @@ import nu.marginalia.actor.state.ActorStep;
import nu.marginalia.io.CrawlerOutputFile;
import nu.marginalia.process.log.WorkLog;
import nu.marginalia.process.log.WorkLogEntry;
import nu.marginalia.service.control.ServiceHeartbeat;
import nu.marginalia.slop.SlopCrawlDataRecord;
import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorage;
@@ -18,6 +19,7 @@ import org.slf4j.LoggerFactory;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;
@@ -26,14 +28,15 @@ import java.util.function.Function;
public class MigrateCrawlDataActor extends RecordActorPrototype {
private final FileStorageService fileStorageService;
private final ServiceHeartbeat serviceHeartbeat;
private static final Logger logger = LoggerFactory.getLogger(MigrateCrawlDataActor.class);
@Inject
public MigrateCrawlDataActor(Gson gson, FileStorageService fileStorageService) {
public MigrateCrawlDataActor(Gson gson, FileStorageService fileStorageService, ServiceHeartbeat serviceHeartbeat) {
super(gson);
this.fileStorageService = fileStorageService;
this.serviceHeartbeat = serviceHeartbeat;
}
public record Run(long fileStorageId) implements ActorStep {}
@@ -49,33 +52,50 @@ public class MigrateCrawlDataActor extends RecordActorPrototype {
Path crawlerLog = root.resolve("crawler.log");
Path newCrawlerLog = Files.createTempFile(root, "crawler", ".migrate.log");
try (WorkLog workLog = new WorkLog(newCrawlerLog)) {
int totalEntries = WorkLog.countEntries(crawlerLog);
try (WorkLog workLog = new WorkLog(newCrawlerLog);
var heartbeat = serviceHeartbeat.createServiceAdHocTaskHeartbeat("Migrating")
) {
int entryIdx = 0;
for (Map.Entry<WorkLogEntry, Path> item : WorkLog.iterableMap(crawlerLog, new CrawlDataLocator(root))) {
var entry = item.getKey();
var path = item.getValue();
final WorkLogEntry entry = item.getKey();
final Path inputPath = item.getValue();
logger.info("Converting {}", entry.id());
Path outputPath = inputPath;
heartbeat.progress("Migrating" + inputPath.getFileName(), entryIdx++, totalEntries);
if (path.toFile().getName().endsWith(".parquet")) {
if (inputPath.toString().endsWith(".parquet")) {
String domain = entry.id();
String id = Integer.toHexString(domain.hashCode());
Path outputFile = CrawlerOutputFile.createSlopPath(root, id, domain);
outputPath = CrawlerOutputFile.createSlopPath(root, id, domain);
SlopCrawlDataRecord.convertFromParquet(path, outputFile);
if (Files.exists(inputPath)) {
try {
SlopCrawlDataRecord.convertFromParquet(inputPath, outputPath);
Files.deleteIfExists(inputPath);
} catch (Exception ex) {
outputPath = inputPath; // don't update the work log on error
logger.error("Failed to convert " + inputPath, ex);
}
}
else if (!Files.exists(inputPath) && !Files.exists(outputPath)) {
// if the input file is missing, and the output file is missing, we just write the log
// record identical to the old one
outputPath = inputPath;
}
}
workLog.setJobToFinished(entry.id(), outputFile.toString(), entry.cnt());
}
else {
workLog.setJobToFinished(entry.id(), path.toString(), entry.cnt());
}
// Write a log entry for the (possibly) converted file
workLog.setJobToFinished(entry.id(), outputPath.toString(), entry.cnt());
}
}
Path oldCrawlerLog = Files.createTempFile(root, "crawler-", ".migrate.old.log");
Files.move(crawlerLog, oldCrawlerLog);
Files.move(crawlerLog, oldCrawlerLog, StandardCopyOption.REPLACE_EXISTING);
Files.move(newCrawlerLog, crawlerLog);
yield new End();

View File

@@ -0,0 +1,47 @@
plugins {
id 'java'
id "com.google.protobuf" version "0.9.4"
id 'jvm-test-suite'
}
java {
toolchain {
languageVersion.set(JavaLanguageVersion.of(rootProject.ext.jvmVersion))
}
}
jar.archiveBaseName = 'favicon-api'
apply from: "$rootProject.projectDir/protobuf.gradle"
apply from: "$rootProject.projectDir/srcsets.gradle"
dependencies {
implementation project(':code:common:model')
implementation project(':code:common:config')
implementation project(':code:common:service')
implementation libs.bundles.slf4j
implementation libs.prometheus
implementation libs.notnull
implementation libs.guava
implementation dependencies.create(libs.guice.get()) {
exclude group: 'com.google.guava'
}
implementation libs.gson
implementation libs.bundles.protobuf
implementation libs.guava
libs.bundles.grpc.get().each {
implementation dependencies.create(it) {
exclude group: 'com.google.guava'
}
}
testImplementation libs.bundles.slf4j.test
testImplementation libs.bundles.junit
testImplementation libs.mockito
}

View File

@@ -0,0 +1,39 @@
package nu.marginalia.api.favicon;
import com.google.inject.Inject;
import nu.marginalia.service.client.GrpcChannelPoolFactory;
import nu.marginalia.service.client.GrpcMultiNodeChannelPool;
import nu.marginalia.service.discovery.property.ServiceKey;
import nu.marginalia.service.discovery.property.ServicePartition;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Optional;
public class FaviconClient {
private static final Logger logger = LoggerFactory.getLogger(FaviconClient.class);
private final GrpcMultiNodeChannelPool<FaviconAPIGrpc.FaviconAPIBlockingStub> channelPool;
@Inject
public FaviconClient(GrpcChannelPoolFactory factory) {
this.channelPool = factory.createMulti(
ServiceKey.forGrpcApi(FaviconAPIGrpc.class, ServicePartition.multi()),
FaviconAPIGrpc::newBlockingStub);
}
public record FaviconData(byte[] bytes, String contentType) {}
public Optional<FaviconData> getFavicon(String domain, int node) {
RpcFaviconResponse rsp = channelPool.call(FaviconAPIGrpc.FaviconAPIBlockingStub::getFavicon)
.forNode(node)
.run(RpcFaviconRequest.newBuilder().setDomain(domain).build());
if (rsp.getData().isEmpty())
return Optional.empty();
return Optional.of(new FaviconData(rsp.getData().toByteArray(), rsp.getContentType()));
}
}

View File

@@ -0,0 +1,20 @@
syntax="proto3";
package marginalia.api.favicon;
option java_package="nu.marginalia.api.favicon";
option java_multiple_files=true;
service FaviconAPI {
/** Fetches information about a domain. */
rpc getFavicon(RpcFaviconRequest) returns (RpcFaviconResponse) {}
}
message RpcFaviconRequest {
string domain = 1;
}
message RpcFaviconResponse {
string domain = 1;
bytes data = 2;
string contentType = 3;
}

View File

@@ -0,0 +1,49 @@
plugins {
id 'java'
id 'application'
id 'jvm-test-suite'
}
java {
toolchain {
languageVersion.set(JavaLanguageVersion.of(rootProject.ext.jvmVersion))
}
}
apply from: "$rootProject.projectDir/srcsets.gradle"
dependencies {
implementation project(':code:common:config')
implementation project(':code:common:service')
implementation project(':code:common:model')
implementation project(':code:common:db')
implementation project(':code:functions:favicon:api')
implementation project(':code:processes:crawling-process')
implementation libs.bundles.slf4j
implementation libs.prometheus
implementation libs.guava
libs.bundles.grpc.get().each {
implementation dependencies.create(it) {
exclude group: 'com.google.guava'
}
}
implementation libs.notnull
implementation libs.guava
implementation dependencies.create(libs.guice.get()) {
exclude group: 'com.google.guava'
}
implementation dependencies.create(libs.spark.get()) {
exclude group: 'org.eclipse.jetty'
}
testImplementation libs.bundles.slf4j.test
testImplementation libs.bundles.junit
testImplementation libs.mockito
}

View File

@@ -0,0 +1,48 @@
package nu.marginalia.functions.favicon;
import com.google.inject.Inject;
import com.google.inject.Singleton;
import com.google.protobuf.ByteString;
import io.grpc.stub.StreamObserver;
import nu.marginalia.api.favicon.FaviconAPIGrpc;
import nu.marginalia.api.favicon.RpcFaviconRequest;
import nu.marginalia.api.favicon.RpcFaviconResponse;
import nu.marginalia.crawl.DomainStateDb;
import nu.marginalia.service.server.DiscoverableService;
import java.util.Optional;
@Singleton
public class FaviconGrpcService extends FaviconAPIGrpc.FaviconAPIImplBase implements DiscoverableService {
private final DomainStateDb domainStateDb;
@Inject
public FaviconGrpcService(DomainStateDb domainStateDb) {
this.domainStateDb = domainStateDb;
}
public boolean shouldRegisterService() {
return domainStateDb.isAvailable();
}
@Override
public void getFavicon(RpcFaviconRequest request, StreamObserver<RpcFaviconResponse> responseObserver) {
Optional<DomainStateDb.FaviconRecord> icon = domainStateDb.getIcon(request.getDomain());
RpcFaviconResponse response;
if (icon.isEmpty()) {
response = RpcFaviconResponse.newBuilder().build();
}
else {
var iconRecord = icon.get();
response = RpcFaviconResponse.newBuilder()
.setContentType(iconRecord.contentType())
.setDomain(request.getDomain())
.setData(ByteString.copyFrom(iconRecord.imageData()))
.build();
}
responseObserver.onNext(response);
responseObserver.onCompleted();
}
}

View File

@@ -34,6 +34,7 @@ dependencies {
implementation libs.bundles.slf4j
implementation libs.commons.lang3
implementation libs.commons.io
implementation libs.wiremock
implementation libs.prometheus
implementation libs.guava

View File

@@ -1,6 +1,7 @@
package nu.marginalia.livecapture;
import com.google.gson.Gson;
import nu.marginalia.WmsaHome;
import nu.marginalia.model.gson.GsonFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -12,6 +13,7 @@ import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.Map;
import java.util.Optional;
/** Client for local browserless.io API */
public class BrowserlessClient implements AutoCloseable {
@@ -27,13 +29,16 @@ public class BrowserlessClient implements AutoCloseable {
private final URI browserlessURI;
private final Gson gson = GsonFactory.get();
private final String userAgent = WmsaHome.getUserAgent().uaString();
public BrowserlessClient(URI browserlessURI) {
this.browserlessURI = browserlessURI;
}
public String content(String url, GotoOptions gotoOptions) throws IOException, InterruptedException {
public Optional<String> content(String url, GotoOptions gotoOptions) throws IOException, InterruptedException {
Map<String, Object> requestData = Map.of(
"url", url,
"userAgent", userAgent,
"gotoOptions", gotoOptions
);
@@ -49,10 +54,10 @@ public class BrowserlessClient implements AutoCloseable {
if (rsp.statusCode() >= 300) {
logger.info("Failed to fetch content for {}, status {}", url, rsp.statusCode());
return null;
return Optional.empty();
}
return rsp.body();
return Optional.of(rsp.body());
}
public byte[] screenshot(String url, GotoOptions gotoOptions, ScreenshotOptions screenshotOptions)
@@ -60,6 +65,7 @@ public class BrowserlessClient implements AutoCloseable {
Map<String, Object> requestData = Map.of(
"url", url,
"userAgent", userAgent,
"options", screenshotOptions,
"gotoOptions", gotoOptions
);
@@ -84,7 +90,7 @@ public class BrowserlessClient implements AutoCloseable {
}
@Override
public void close() throws Exception {
public void close() {
httpClient.shutdownNow();
}

View File

@@ -33,6 +33,7 @@ import java.sql.SQLException;
import java.time.*;
import java.time.format.DateTimeFormatter;
import java.util.*;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
@@ -71,7 +72,7 @@ public class FeedFetcherService {
public enum UpdateMode {
CLEAN,
REFRESH
};
}
public void updateFeeds(UpdateMode updateMode) throws IOException {
if (updating) // Prevent concurrent updates
@@ -87,6 +88,7 @@ public class FeedFetcherService {
.followRedirects(HttpClient.Redirect.NORMAL)
.version(HttpClient.Version.HTTP_2)
.build();
ExecutorService fetchExecutor = Executors.newCachedThreadPool();
FeedJournal feedJournal = FeedJournal.create();
var heartbeat = serviceHeartbeat.createServiceAdHocTaskHeartbeat("Update Rss Feeds")
) {
@@ -131,7 +133,7 @@ public class FeedFetcherService {
FetchResult feedData;
try (DomainLocks.DomainLock domainLock = domainLocks.lockDomain(new EdgeDomain(feed.domain()))) {
feedData = fetchFeedData(feed, client, ifModifiedSinceDate, ifNoneMatchTag);
feedData = fetchFeedData(feed, client, fetchExecutor, ifModifiedSinceDate, ifNoneMatchTag);
} catch (Exception ex) {
feedData = new FetchResult.TransientError();
}
@@ -211,6 +213,7 @@ public class FeedFetcherService {
private FetchResult fetchFeedData(FeedDefinition feed,
HttpClient client,
ExecutorService executorService,
@Nullable String ifModifiedSinceDate,
@Nullable String ifNoneMatchTag)
{
@@ -237,7 +240,14 @@ public class FeedFetcherService {
HttpRequest getRequest = requestBuilder.build();
for (int i = 0; i < 3; i++) {
HttpResponse<byte[]> rs = client.send(getRequest, HttpResponse.BodyHandlers.ofByteArray());
/* Note we need to use an executor to time-limit the send() method in HttpClient, as
* its support for timeouts only applies to the time until response starts to be received,
* and does not catch the case when the server starts to send data but then hangs.
*/
HttpResponse<byte[]> rs = executorService.submit(
() -> client.send(getRequest, HttpResponse.BodyHandlers.ofByteArray()))
.get(15, TimeUnit.SECONDS);
if (rs.statusCode() == 429) { // Too Many Requests
int retryAfter = Integer.parseInt(rs.headers().firstValue("Retry-After").orElse("2"));

View File

@@ -1,5 +1,9 @@
package nu.marginalia.livecapture;
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;
import nu.marginalia.WmsaHome;
import nu.marginalia.service.module.ServiceConfigurationModule;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Tag;
@@ -8,34 +12,86 @@ import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import java.io.IOException;
import java.net.URI;
import java.util.Map;
import static com.github.tomakehurst.wiremock.client.WireMock.*;
@Testcontainers
@Tag("slow")
public class BrowserlessClientTest {
static GenericContainer<?> container = new GenericContainer<>(DockerImageName.parse("browserless/chrome"))
.withEnv(Map.of("TOKEN", "BROWSERLESS_TOKEN"))
.withNetworkMode("bridge")
.withExposedPorts(3000);
static WireMockServer wireMockServer =
new WireMockServer(WireMockConfiguration.wireMockConfig()
.port(18089));
static String localIp;
static URI browserlessURI;
@BeforeAll
public static void setup() {
public static void setup() throws IOException {
container.start();
browserlessURI = URI.create(String.format("http://%s:%d/",
container.getHost(),
container.getMappedPort(3000))
);
wireMockServer.start();
wireMockServer.stubFor(get("/").willReturn(aResponse().withStatus(200).withBody("Ok")));
localIp = ServiceConfigurationModule.getLocalNetworkIP();
}
@Tag("flaky")
@Test
public void testInspectContentUA__Flaky() throws Exception {
try (var client = new BrowserlessClient(browserlessURI)) {
client.content("http://" + localIp + ":18089/",
BrowserlessClient.GotoOptions.defaultValues()
);
}
wireMockServer.verify(getRequestedFor(urlEqualTo("/")).withHeader("User-Agent", equalTo(WmsaHome.getUserAgent().uaString())));
}
@Tag("flaky")
@Test
public void testInspectScreenshotUA__Flaky() throws Exception {
try (var client = new BrowserlessClient(browserlessURI)) {
client.screenshot("http://" + localIp + ":18089/",
BrowserlessClient.GotoOptions.defaultValues(),
BrowserlessClient.ScreenshotOptions.defaultValues()
);
}
wireMockServer.verify(getRequestedFor(urlEqualTo("/")).withHeader("User-Agent", equalTo(WmsaHome.getUserAgent().uaString())));
}
@Test
public void testContent() throws Exception {
try (var client = new BrowserlessClient(URI.create("http://" + container.getHost() + ":" + container.getMappedPort(3000)))) {
var content = client.content("https://www.marginalia.nu/", BrowserlessClient.GotoOptions.defaultValues());
Assertions.assertNotNull(content, "Content should not be null");
try (var client = new BrowserlessClient(browserlessURI)) {
var content = client.content("https://www.marginalia.nu/", BrowserlessClient.GotoOptions.defaultValues()).orElseThrow();
Assertions.assertFalse(content.isBlank(), "Content should not be empty");
}
}
@Test
public void testScreenshot() throws Exception {
try (var client = new BrowserlessClient(URI.create("http://" + container.getHost() + ":" + container.getMappedPort(3000)))) {
var screenshot = client.screenshot("https://www.marginalia.nu/", BrowserlessClient.GotoOptions.defaultValues(), BrowserlessClient.ScreenshotOptions.defaultValues());
try (var client = new BrowserlessClient(browserlessURI)) {
var screenshot = client.screenshot("https://www.marginalia.nu/",
BrowserlessClient.GotoOptions.defaultValues(),
BrowserlessClient.ScreenshotOptions.defaultValues());
Assertions.assertNotNull(screenshot, "Screenshot should not be null");
}
}

View File

@@ -134,6 +134,10 @@ public class QueryExpansion {
if (scoreCombo > scoreA + scoreB || scoreCombo > 1000) {
graph.addVariantForSpan(prev, qw, joinedWord);
}
else if (StringUtils.isAlpha(prev.word()) && StringUtils.isNumeric(qw.word())) { // join e.g. trs 80 to trs80 and trs-80
graph.addVariantForSpan(prev, qw, prev.word() + qw.word());
graph.addVariantForSpan(prev, qw, prev.word() + "-" + qw.word());
}
}
prev = qw;

View File

@@ -213,6 +213,18 @@ public class QueryFactoryTest {
System.out.println(subquery);
}
@Test
public void testContractionWordNum() {
var subquery = parseAndGetSpecs("glove 80");
Assertions.assertTrue(subquery.query.compiledQuery.contains(" glove "));
Assertions.assertTrue(subquery.query.compiledQuery.contains(" 80 "));
Assertions.assertTrue(subquery.query.compiledQuery.contains(" glove-80 "));
Assertions.assertTrue(subquery.query.compiledQuery.contains(" glove80 "));
}
@Test
public void testCplusPlus() {
var subquery = parseAndGetSpecs("std::vector::push_back vector");

View File

@@ -23,16 +23,33 @@ public class SimpleBlockingThreadPool {
private final Logger logger = LoggerFactory.getLogger(SimpleBlockingThreadPool.class);
public SimpleBlockingThreadPool(String name, int poolSize, int queueSize) {
this(name, poolSize, queueSize, ThreadType.PLATFORM);
}
public SimpleBlockingThreadPool(String name, int poolSize, int queueSize, ThreadType threadType) {
tasks = new ArrayBlockingQueue<>(queueSize);
for (int i = 0; i < poolSize; i++) {
Thread worker = new Thread(this::worker, name + "[" + i + "]");
worker.setDaemon(true);
worker.start();
Thread.Builder threadBuilder = switch (threadType) {
case VIRTUAL -> Thread.ofVirtual();
case PLATFORM -> Thread.ofPlatform().daemon(true);
};
Thread worker = threadBuilder
.name(name + "[" + i + "]")
.start(this::worker);
workers.add(worker);
}
}
public enum ThreadType {
VIRTUAL,
PLATFORM
}
public void submit(Task task) throws InterruptedException {
tasks.put(task);
}

View File

@@ -5,9 +5,7 @@ import nu.marginalia.actor.state.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.*;
public abstract class RecordActorPrototype implements ActorPrototype {
@@ -118,7 +116,7 @@ public abstract class RecordActorPrototype implements ActorPrototype {
}
private String functionName(Class<? extends ActorStep> functionClass) {
return functionClass.getSimpleName().toUpperCase();
return ActorStep.functionName(functionClass);
}
private ActorStep constructState(String message) throws ReflectiveOperationException {
@@ -145,4 +143,43 @@ public abstract class RecordActorPrototype implements ActorPrototype {
}
}
/** Get a list of JSON prototypes for each actor step declared by this actor */
@SuppressWarnings("unchecked")
public Map<String, String> getMessagePrototypes() {
Map<String, String> messagePrototypes = new HashMap<>();
for (var clazz : getClass().getDeclaredClasses()) {
if (!clazz.isRecord() || !ActorStep.class.isAssignableFrom(clazz))
continue;
StringJoiner sj = new StringJoiner(",\n\t", "{\n\t", "\n}");
renderToJsonPrototype(sj, (Class<? extends Record>) clazz);
messagePrototypes.put(ActorStep.functionName((Class<? extends ActorStep>) clazz), sj.toString());
}
return messagePrototypes;
}
@SuppressWarnings("unchecked")
private void renderToJsonPrototype(StringJoiner sj, Class<? extends Record> recordType) {
for (var field : recordType.getDeclaredFields()) {
String typeName = field.getType().getSimpleName();
if ("List".equals(typeName)) {
sj.add(String.format("\"%s\": [ ]", field.getName()));
}
else if (field.getType().isRecord()) {
var innerSj = new StringJoiner(",", "{", "}");
renderToJsonPrototype(innerSj, (Class<? extends Record>) field.getType());
sj.add(String.format("\"%s\": %s", field.getName(), innerSj));
}
else {
sj.add(String.format("\"%s\": \"%s\"", field.getName(), typeName));
}
}
}
}

View File

@@ -1,3 +1,7 @@
package nu.marginalia.actor.state;
public interface ActorStep {}
public interface ActorStep {
static String functionName(Class<? extends ActorStep> type) {
return type.getSimpleName().toUpperCase();
}
}

View File

@@ -87,6 +87,8 @@ dependencies {
implementation libs.commons.compress
implementation libs.sqlite
implementation libs.bundles.httpcomponents
testImplementation libs.bundles.slf4j.test
testImplementation libs.bundles.junit
testImplementation libs.mockito

View File

@@ -35,6 +35,7 @@ import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Optional;
@@ -202,13 +203,19 @@ public class ConverterMain extends ProcessMainClass {
heartbeat.setProgress(processedDomains.get() / (double) totalDomains);
logger.info("Processing small items");
int numBigTasks = 0;
// We separate the large and small domains to reduce the number of critical sections,
// as the large domains have a separate processing track that doesn't store everything
// in memory
final List<Path> bigTasks = new ArrayList<>();
// First process the small items
for (var dataPath : WorkLog.iterableMap(crawlDir.getLogFile(),
new CrawlDataLocator(crawlDir.getDir(), batchingWorkLog)))
{
if (SerializableCrawlDataStream.getSizeHint(dataPath) >= SIDELOAD_THRESHOLD) {
numBigTasks ++;
bigTasks.add(dataPath);
continue;
}
@@ -239,15 +246,8 @@ public class ConverterMain extends ProcessMainClass {
try (var hb = heartbeat.createAdHocTaskHeartbeat("Large Domains")) {
int bigTaskIdx = 0;
// Next the big items domain-by-domain
for (var dataPath : WorkLog.iterableMap(crawlDir.getLogFile(),
new CrawlDataLocator(crawlDir.getDir(), batchingWorkLog)))
{
int sizeHint = SerializableCrawlDataStream.getSizeHint(dataPath);
if (sizeHint < SIDELOAD_THRESHOLD) {
continue;
}
hb.progress(dataPath.toFile().getName(), bigTaskIdx++, numBigTasks);
for (var dataPath : bigTasks) {
hb.progress(dataPath.toFile().getName(), bigTaskIdx++, bigTasks.size());
try {
// SerializableCrawlDataStream is autocloseable, we can't try-with-resources because then it will be
@@ -255,7 +255,7 @@ public class ConverterMain extends ProcessMainClass {
// will close it after it's consumed.
var stream = SerializableCrawlDataStream.openDataStream(dataPath);
ConverterBatchWritableIf writable = processor.simpleProcessing(stream, sizeHint);
ConverterBatchWritableIf writable = processor.simpleProcessing(stream, SerializableCrawlDataStream.getSizeHint(dataPath));
converterWriter.accept(writable);
}

View File

@@ -116,7 +116,7 @@ public class AdblockSimulator {
// Refrain from cleaning up this code, it's very hot code and needs to be fast.
// This version is about 100x faster than the a "clean" first stab implementation.
// This version is about 100x faster than a "clean" first stab implementation.
class RuleVisitor implements NodeFilter {
public boolean sawAds;

View File

@@ -23,7 +23,7 @@ public class DocumentGeneratorExtractor {
var tags = doc.select("meta[name=generator]");
if (tags.size() == 0) {
if (tags.isEmpty()) {
// Some sites have a comment in the head instead of a meta tag
return fingerprintServerTech(doc, responseHeaders);
}

View File

@@ -127,7 +127,7 @@ public class EncyclopediaMarginaliaNuSideloader implements SideloadSource, AutoC
}
fullHtml.append("</div></body></html>");
var doc = sideloaderProcessing
return sideloaderProcessing
.processDocument(fullUrl,
fullHtml.toString(),
List.of("encyclopedia", "wiki"),
@@ -137,8 +137,6 @@ public class EncyclopediaMarginaliaNuSideloader implements SideloadSource, AutoC
anchorTextKeywords.getAnchorTextKeywords(domainLinks, new EdgeUrl(fullUrl)),
LocalDate.now().getYear(),
10_000_000);
return doc;
}
private String normalizeUtf8(String url) {

View File

@@ -11,7 +11,6 @@ import nu.marginalia.slop.column.primitive.IntColumn;
import nu.marginalia.slop.column.primitive.LongColumn;
import nu.marginalia.slop.column.string.EnumColumn;
import nu.marginalia.slop.column.string.StringColumn;
import nu.marginalia.slop.column.string.TxtStringColumn;
import nu.marginalia.slop.desc.StorageType;
import org.jetbrains.annotations.Nullable;
@@ -182,8 +181,8 @@ public record SlopDocumentRecord(
}
// Basic information
private static final TxtStringColumn domainsColumn = new TxtStringColumn("domain", StandardCharsets.UTF_8, StorageType.GZIP);
private static final TxtStringColumn urlsColumn = new TxtStringColumn("url", StandardCharsets.UTF_8, StorageType.GZIP);
private static final StringColumn domainsColumn = new StringColumn("domain", StandardCharsets.UTF_8, StorageType.GZIP);
private static final StringColumn urlsColumn = new StringColumn("url", StandardCharsets.UTF_8, StorageType.GZIP);
private static final VarintColumn ordinalsColumn = new VarintColumn("ordinal", StorageType.PLAIN);
private static final EnumColumn statesColumn = new EnumColumn("state", StandardCharsets.US_ASCII, StorageType.PLAIN);
private static final StringColumn stateReasonsColumn = new StringColumn("stateReason", StandardCharsets.US_ASCII, StorageType.GZIP);
@@ -211,7 +210,7 @@ public record SlopDocumentRecord(
private static final VarintCodedSequenceArrayColumn spansColumn = new VarintCodedSequenceArrayColumn("spans", StorageType.ZSTD);
public static class KeywordsProjectionReader extends SlopTable {
private final TxtStringColumn.Reader domainsReader;
private final StringColumn.Reader domainsReader;
private final VarintColumn.Reader ordinalsReader;
private final IntColumn.Reader htmlFeaturesReader;
private final LongColumn.Reader domainMetadataReader;
@@ -275,8 +274,8 @@ public record SlopDocumentRecord(
}
public static class MetadataReader extends SlopTable {
private final TxtStringColumn.Reader domainsReader;
private final TxtStringColumn.Reader urlsReader;
private final StringColumn.Reader domainsReader;
private final StringColumn.Reader urlsReader;
private final VarintColumn.Reader ordinalsReader;
private final StringColumn.Reader titlesReader;
private final StringColumn.Reader descriptionsReader;
@@ -332,8 +331,8 @@ public record SlopDocumentRecord(
}
public static class Writer extends SlopTable {
private final TxtStringColumn.Writer domainsWriter;
private final TxtStringColumn.Writer urlsWriter;
private final StringColumn.Writer domainsWriter;
private final StringColumn.Writer urlsWriter;
private final VarintColumn.Writer ordinalsWriter;
private final EnumColumn.Writer statesWriter;
private final StringColumn.Writer stateReasonsWriter;

View File

@@ -8,7 +8,6 @@ import nu.marginalia.converting.model.ProcessedDomain;
import nu.marginalia.converting.processor.DomainProcessor;
import nu.marginalia.crawl.CrawlerMain;
import nu.marginalia.crawl.DomainStateDb;
import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
@@ -247,7 +246,7 @@ public class CrawlingThenConvertingIntegrationTest {
private CrawledDomain crawl(CrawlerMain.CrawlSpecRecord specs, Predicate<EdgeDomain> domainBlacklist) throws Exception {
List<SerializableCrawlData> data = new ArrayList<>();
try (var recorder = new WarcRecorder(fileName, new Cookies());
try (var recorder = new WarcRecorder(fileName);
var db = new DomainStateDb(dbTempFile))
{
new CrawlerRetreiver(httpFetcher, new DomainProber(domainBlacklist), specs, db, recorder).crawlDomain();

View File

@@ -60,10 +60,14 @@ dependencies {
implementation libs.fastutil
implementation libs.bundles.mariadb
implementation libs.bundles.httpcomponents
testImplementation libs.bundles.slf4j.test
testImplementation libs.bundles.junit
testImplementation libs.mockito
testImplementation libs.wiremock
testImplementation project(':code:processes:test-data')
}

View File

@@ -41,10 +41,8 @@ import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.security.Security;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.*;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
@@ -68,6 +66,7 @@ public class CrawlerMain extends ProcessMainClass {
private final DomainLocks domainLocks = new DomainLocks();
private final Map<String, CrawlTask> pendingCrawlTasks = new ConcurrentHashMap<>();
private final ArrayBlockingQueue<CrawlTask> retryQueue = new ArrayBlockingQueue<>(64);
private final AtomicInteger tasksDone = new AtomicInteger(0);
private final HttpFetcherImpl fetcher;
@@ -106,9 +105,18 @@ public class CrawlerMain extends ProcessMainClass {
this.blacklist = blacklist;
this.node = processConfiguration.node();
SimpleBlockingThreadPool.ThreadType threadType;
if (Boolean.getBoolean("crawler.useVirtualThreads")) {
threadType = SimpleBlockingThreadPool.ThreadType.VIRTUAL;
}
else {
threadType = SimpleBlockingThreadPool.ThreadType.PLATFORM;
}
pool = new SimpleBlockingThreadPool("CrawlerPool",
Integer.getInteger("crawler.poolSize", 256),
1);
1,
threadType);
// Wait for the blacklist to be loaded before starting the crawl
@@ -224,10 +232,7 @@ public class CrawlerMain extends ProcessMainClass {
logger.info("Loaded {} domains", crawlSpecRecords.size());
// Shuffle the domains to ensure we get a good mix of domains in each crawl,
// so that e.g. the big domains don't get all crawled at once, or we end up
// crawling the same server in parallel from different subdomains...
Collections.shuffle(crawlSpecRecords);
crawlSpecRecords.sort(crawlSpecArrangement(crawlSpecRecords));
// First a validation run to ensure the file is all good to parse
if (crawlSpecRecords.isEmpty()) {
@@ -248,9 +253,14 @@ public class CrawlerMain extends ProcessMainClass {
// (this happens when the process is restarted after a crash or a shutdown)
tasksDone.set(workLog.countFinishedJobs());
// Create crawl tasks and submit them to the pool for execution
// List of deferred tasks used to ensure beneficial scheduling of domains with regard to DomainLocks,
// merely shuffling the domains tends to lead to a lot of threads being blocked waiting for a semaphore,
// this will more aggressively attempt to schedule the jobs to avoid blocking
List<CrawlTask> taskList = new ArrayList<>();
// Create crawl tasks
for (CrawlSpecRecord crawlSpec : crawlSpecRecords) {
if (workLog.isJobFinished(crawlSpec.domain()))
if (workLog.isJobFinished(crawlSpec.domain))
continue;
var task = new CrawlTask(
@@ -261,8 +271,33 @@ public class CrawlerMain extends ProcessMainClass {
domainStateDb,
workLog);
if (pendingCrawlTasks.putIfAbsent(crawlSpec.domain(), task) == null) {
pool.submitQuietly(task);
// Try to run immediately, to avoid unnecessarily keeping the entire work set in RAM
if (!trySubmitDeferredTask(task)) {
// Otherwise add to the taskList for deferred execution
taskList.add(task);
}
}
// Schedule viable tasks for execution until list is empty
for (int emptyRuns = 0;emptyRuns < 300;) {
boolean hasTasks = !taskList.isEmpty();
// The order of these checks is very important to avoid a race condition
// where we miss a task that is put into the retry queue
boolean hasRunningTasks = pool.getActiveCount() > 0;
boolean hasRetryTasks = !retryQueue.isEmpty();
if (hasTasks || hasRetryTasks || hasRunningTasks) {
retryQueue.drainTo(taskList);
taskList.removeIf(this::trySubmitDeferredTask);
// Add a small pause here to avoid busy looping toward the end of the execution cycle when
// we might have no new viable tasks to run for hours on end
TimeUnit.MILLISECONDS.sleep(50);
} else {
// We have no tasks to run, and no tasks in the retry queue
// but we wait a bit to see if any new tasks come in via the retry queue
emptyRuns++;
TimeUnit.SECONDS.sleep(1);
}
}
@@ -290,6 +325,52 @@ public class CrawlerMain extends ProcessMainClass {
}
}
/** Create a comparator that sorts the crawl specs in a way that is beneficial for the crawl:
 * we want to enqueue domains that have common top domains first, but otherwise keep a random
 * order.
* <p></p>
* Note, we can't use hash codes for randomization as it is not desirable to have the same order
* every time the process is restarted (and CrawlSpecRecord is a record, which defines equals and
* hashcode based on the fields).
* */
private Comparator<CrawlSpecRecord> crawlSpecArrangement(List<CrawlSpecRecord> records) {
Random r = new Random();
Map<String, Integer> topDomainCounts = new HashMap<>(4 + (int) Math.sqrt(records.size()));
Map<String, Integer> randomOrder = new HashMap<>(records.size());
for (var spec : records) {
topDomainCounts.merge(EdgeDomain.getTopDomain(spec.domain), 1, Integer::sum);
randomOrder.put(spec.domain, r.nextInt());
}
return Comparator.comparing((CrawlSpecRecord spec) -> topDomainCounts.getOrDefault(EdgeDomain.getTopDomain(spec.domain), 0) >= 8)
.reversed()
.thenComparing(spec -> randomOrder.get(spec.domain))
.thenComparing(Record::hashCode); // non-deterministic tie-breaker
}
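As a rough illustration of the resulting order (the domain names are made up):

// Input specs:  a.blogspot.com, example.org, b.blogspot.com, ... with, say, 20 *.blogspot.com entries
// After sort:   all *.blogspot.com specs come first (their top domain occurs >= 8 times),
//               followed by the remaining domains in a per-run random order.
// This front-loads the domains most likely to contend for the same DomainLock, so they are
// scheduled early rather than piling up behind the semaphore at the end of the run.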
/** Submit a task for execution if it can be run; returns true if it was submitted
 * or if it can be discarded */
private boolean trySubmitDeferredTask(CrawlTask task) {
if (!task.canRun()) {
return false;
}
if (pendingCrawlTasks.putIfAbsent(task.domain, task) != null) {
return true; // task has already run, duplicate in crawl specs
}
try {
// This blocks the caller when the pool is full
pool.submitQuietly(task);
return true;
}
catch (RuntimeException ex) {
logger.error("Failed to submit task " + task.domain, ex);
return false;
}
}
public void runForSingleDomain(String targetDomainName, FileStorageId fileStorageId) throws Exception {
runForSingleDomain(targetDomainName, fileStorageService.getStorage(fileStorageId).asPath());
}
@@ -346,69 +427,93 @@ public class CrawlerMain extends ProcessMainClass {
this.id = Integer.toHexString(domain.hashCode());
}
/** Best effort indicator whether we could start this now without getting stuck in
* DomainLocks purgatory */
public boolean canRun() {
return domainLocks.canLock(new EdgeDomain(domain));
}
@Override
public void run() throws Exception {
Path newWarcFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.LIVE);
Path tempFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.TEMP);
Path slopFile = CrawlerOutputFile.createSlopPath(outputDir, id, domain);
// Move the WARC file to a temp file if it exists, so we can resume the crawl using the old data
// while writing to the same file name as before
if (Files.exists(newWarcFile)) {
Files.move(newWarcFile, tempFile, StandardCopyOption.REPLACE_EXISTING);
}
else {
Files.deleteIfExists(tempFile);
if (workLog.isJobFinished(domain)) { // No-Op
logger.info("Omitting task {}, as it is already run", domain);
return;
}
try (var warcRecorder = new WarcRecorder(newWarcFile, fetcher); // write to a temp file for now
var retriever = new CrawlerRetreiver(fetcher, domainProber, specification, domainStateDb, warcRecorder);
CrawlDataReference reference = getReference()
)
{
// Resume the crawl if it was aborted
if (Files.exists(tempFile)) {
retriever.syncAbortedRun(tempFile);
Files.delete(tempFile);
Optional<DomainLocks.DomainLock> lock = domainLocks.tryLockDomain(new EdgeDomain(domain));
// We don't have a lock, so we can't run this task
// we return to avoid blocking the pool for too long
if (lock.isEmpty()) {
if (retryQueue.remainingCapacity() > 0) {
// Sleep a moment to avoid busy looping via the retry queue
// in the case when few tasks remain
Thread.sleep(10);
}
DomainLinks domainLinks = anchorTagsSource.getAnchorTags(domain);
retryQueue.put(this);
return;
}
DomainLocks.DomainLock domainLock = lock.get();
int size;
try (var lock = domainLocks.lockDomain(new EdgeDomain(domain))) {
size = retriever.crawlDomain(domainLinks, reference);
try (domainLock) {
Path newWarcFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.LIVE);
Path tempFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.TEMP);
Path slopFile = CrawlerOutputFile.createSlopPath(outputDir, id, domain);
// Move the WARC file to a temp file if it exists, so we can resume the crawl using the old data
// while writing to the same file name as before
if (Files.exists(newWarcFile)) {
Files.move(newWarcFile, tempFile, StandardCopyOption.REPLACE_EXISTING);
}
else {
Files.deleteIfExists(tempFile);
}
// Delete the reference crawl data if it's not the same as the new one
// (mostly a case when migrating from legacy->warc)
reference.delete();
try (var warcRecorder = new WarcRecorder(newWarcFile); // write to a temp file for now
var retriever = new CrawlerRetreiver(fetcher, domainProber, specification, domainStateDb, warcRecorder);
CrawlDataReference reference = getReference())
{
// Resume the crawl if it was aborted
if (Files.exists(tempFile)) {
retriever.syncAbortedRun(tempFile);
Files.delete(tempFile);
}
// Convert the WARC file to Parquet
SlopCrawlDataRecord
.convertWarc(domain, userAgent, newWarcFile, slopFile);
DomainLinks domainLinks = anchorTagsSource.getAnchorTags(domain);
// Optionally archive the WARC file if full retention is enabled,
// otherwise delete it:
warcArchiver.consumeWarc(newWarcFile, domain);
int size = retriever.crawlDomain(domainLinks, reference);
// Mark the domain as finished in the work log
workLog.setJobToFinished(domain, slopFile.toString(), size);
// Delete the reference crawl data if it's not the same as the new one
// (mostly a case when migrating from legacy->warc)
reference.delete();
// Update the progress bar
heartbeat.setProgress(tasksDone.incrementAndGet() / (double) totalTasks);
// Convert the WARC file to Parquet
SlopCrawlDataRecord
.convertWarc(domain, userAgent, newWarcFile, slopFile);
logger.info("Fetched {}", domain);
} catch (Exception e) {
logger.error("Error fetching domain " + domain, e);
}
finally {
// We don't need to double-count these; it's also kept int he workLog
pendingCrawlTasks.remove(domain);
Thread.currentThread().setName("[idle]");
// Optionally archive the WARC file if full retention is enabled,
// otherwise delete it:
warcArchiver.consumeWarc(newWarcFile, domain);
Files.deleteIfExists(newWarcFile);
Files.deleteIfExists(tempFile);
// Mark the domain as finished in the work log
workLog.setJobToFinished(domain, slopFile.toString(), size);
// Update the progress bar
heartbeat.setProgress(tasksDone.incrementAndGet() / (double) totalTasks);
logger.info("Fetched {}", domain);
} catch (Exception e) {
logger.error("Error fetching domain " + domain, e);
}
finally {
// We don't need to double-count these; it's also kept in the workLog
pendingCrawlTasks.remove(domain);
Thread.currentThread().setName("[idle]");
Files.deleteIfExists(newWarcFile);
Files.deleteIfExists(tempFile);
}
}
}
@@ -425,7 +530,7 @@ public class CrawlerMain extends ProcessMainClass {
return new CrawlDataReference(slopPath);
}
} catch (IOException e) {
} catch (Exception e) {
logger.debug("Failed to read previous crawl data for {}", specification.domain());
}
@@ -494,7 +599,7 @@ public class CrawlerMain extends ProcessMainClass {
//
// This must be synchronized as chewing through parquet files in parallel leads to enormous memory overhead
private synchronized Path migrateParquetData(Path inputPath, String domain, Path crawlDataRoot) throws IOException {
if (!inputPath.endsWith(".parquet")) {
if (!inputPath.toString().endsWith(".parquet")) {
return inputPath;
}

View File

@@ -1,5 +1,8 @@
package nu.marginalia.crawl;
import com.google.inject.Inject;
import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorageType;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -8,6 +11,7 @@ import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.time.Duration;
import java.time.Instant;
import java.util.Objects;
import java.util.Optional;
@@ -21,6 +25,17 @@ public class DomainStateDb implements AutoCloseable {
private final Connection connection;
public record CrawlMeta(
String domainName,
Instant lastFullCrawl,
Duration recrawlTime,
Duration crawlTime,
int recrawlErrors,
int crawlChanges,
int totalCrawlSize
) {}
public record SummaryRecord(
String domainName,
Instant lastUpdated,
@@ -63,7 +78,29 @@ public class DomainStateDb implements AutoCloseable {
public record FaviconRecord(String contentType, byte[] imageData) {}
public DomainStateDb(Path filename) throws SQLException {
@Inject
public DomainStateDb(FileStorageService fileStorageService) throws SQLException {
this(findFilename(fileStorageService));
}
private static Path findFilename(FileStorageService fileStorageService) throws SQLException {
var fsId = fileStorageService.getOnlyActiveFileStorage(FileStorageType.CRAWL_DATA);
if (fsId.isPresent()) {
var fs = fileStorageService.getStorage(fsId.get());
return fs.asPath().resolve("domainstate.db");
}
else {
return null;
}
}
public DomainStateDb(@Nullable Path filename) throws SQLException {
if (null == filename) {
connection = null;
return;
}
String sqliteDbString = "jdbc:sqlite:" + filename.toString();
connection = DriverManager.getConnection(sqliteDbString);
@@ -77,6 +114,17 @@ public class DomainStateDb implements AutoCloseable {
feedUrl TEXT
)
""");
stmt.executeUpdate("""
CREATE TABLE IF NOT EXISTS crawl_meta (
domain TEXT PRIMARY KEY,
lastFullCrawlEpochMs LONG NOT NULL,
recrawlTimeMs LONG NOT NULL,
recrawlErrors INTEGER NOT NULL,
crawlTimeMs LONG NOT NULL,
crawlChanges INTEGER NOT NULL,
totalCrawlSize INTEGER NOT NULL
)
""");
stmt.executeUpdate("""
CREATE TABLE IF NOT EXISTS favicon (
domain TEXT PRIMARY KEY,
@@ -90,11 +138,18 @@ public class DomainStateDb implements AutoCloseable {
@Override
public void close() throws SQLException {
connection.close();
if (connection != null) {
connection.close();
}
}
public boolean isAvailable() {
return connection != null;
}
public void saveIcon(String domain, FaviconRecord faviconRecord) {
if (connection == null) throw new IllegalStateException("No connection to domainstate db");
try (var stmt = connection.prepareStatement("""
INSERT OR REPLACE INTO favicon (domain, contentType, icon)
VALUES(?, ?, ?)
@@ -110,6 +165,9 @@ public class DomainStateDb implements AutoCloseable {
}
public Optional<FaviconRecord> getIcon(String domain) {
if (connection == null)
return Optional.empty();
try (var stmt = connection.prepareStatement("SELECT contentType, icon FROM favicon WHERE DOMAIN = ?")) {
stmt.setString(1, domain);
var rs = stmt.executeQuery();
@@ -129,7 +187,29 @@ public class DomainStateDb implements AutoCloseable {
return Optional.empty();
}
public void save(CrawlMeta crawlMeta) {
if (connection == null) throw new IllegalStateException("No connection to domainstate db");
try (var stmt = connection.prepareStatement("""
INSERT OR REPLACE INTO crawl_meta (domain, lastFullCrawlEpochMs, recrawlTimeMs, recrawlErrors, crawlTimeMs, crawlChanges, totalCrawlSize)
VALUES (?, ?, ?, ?, ?, ?, ?)
""")) {
stmt.setString(1, crawlMeta.domainName());
stmt.setLong(2, crawlMeta.lastFullCrawl.toEpochMilli());
stmt.setLong(3, crawlMeta.recrawlTime.toMillis());
stmt.setInt(4, crawlMeta.recrawlErrors);
stmt.setLong(5, crawlMeta.crawlTime.toMillis());
stmt.setInt(6, crawlMeta.crawlChanges);
stmt.setInt(7, crawlMeta.totalCrawlSize);
stmt.executeUpdate();
} catch (SQLException e) {
logger.error("Failed to insert crawl meta record", e);
}
}
public void save(SummaryRecord record) {
if (connection == null) throw new IllegalStateException("No connection to domainstate db");
try (var stmt = connection.prepareStatement("""
INSERT OR REPLACE INTO summary (domain, lastUpdatedEpochMs, state, stateDesc, feedUrl)
VALUES (?, ?, ?, ?, ?)
@@ -145,7 +225,38 @@ public class DomainStateDb implements AutoCloseable {
}
}
public Optional<SummaryRecord> get(String domainName) {
public Optional<CrawlMeta> getMeta(String domainName) {
if (connection == null)
return Optional.empty();
try (var stmt = connection.prepareStatement("""
SELECT domain, lastFullCrawlEpochMs, recrawlTimeMs, recrawlErrors, crawlTimeMs, crawlChanges, totalCrawlSize
FROM crawl_meta
WHERE domain = ?
""")) {
stmt.setString(1, domainName);
var rs = stmt.executeQuery();
if (rs.next()) {
return Optional.of(new CrawlMeta(
rs.getString("domain"),
Instant.ofEpochMilli(rs.getLong("lastFullCrawlEpochMs")),
Duration.ofMillis(rs.getLong("recrawlTimeMs")),
Duration.ofMillis(rs.getLong("crawlTimeMs")),
rs.getInt("recrawlErrors"),
rs.getInt("crawlChanges"),
rs.getInt("totalCrawlSize")
));
}
} catch (SQLException ex) {
logger.error("Failed to get crawl meta record", ex);
}
return Optional.empty();
}
public Optional<SummaryRecord> getSummary(String domainName) {
if (connection == null)
return Optional.empty();
try (var stmt = connection.prepareStatement("""
SELECT domain, lastUpdatedEpochMs, state, stateDesc, feedUrl
FROM summary

View File

@@ -1,6 +1,6 @@
package nu.marginalia.crawl.fetcher;
import java.net.http.HttpRequest;
import org.apache.hc.client5.http.classic.methods.HttpGet;
/** Encapsulates request modifiers; the ETag and Last-Modified tags for a resource */
public record ContentTags(String etag, String lastMod) {
@@ -17,14 +17,14 @@ public record ContentTags(String etag, String lastMod) {
}
/** Paints the tags onto the request builder. */
public void paint(HttpRequest.Builder getBuilder) {
public void paint(HttpGet request) {
if (etag != null) {
getBuilder.header("If-None-Match", etag);
request.addHeader("If-None-Match", etag);
}
if (lastMod != null) {
getBuilder.header("If-Modified-Since", lastMod);
request.addHeader("If-Modified-Since", lastMod);
}
}
}
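A minimal usage sketch showing how the stored validators end up on a conditional revisit request; the URL, ETag, and date values are made up:

HttpGet request = new HttpGet("https://www.example.com/index.html");
ContentTags tags = new ContentTags("\"abc123\"", "Wed, 01 Jan 2025 00:00:00 GMT");
tags.paint(request); // adds If-None-Match and If-Modified-Since only when the values are non-null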

View File

@@ -1,34 +0,0 @@
package nu.marginalia.crawl.fetcher;
import java.io.IOException;
import java.net.CookieHandler;
import java.net.URI;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
public class Cookies extends CookieHandler {
final ThreadLocal<ConcurrentHashMap<String, List<String>>> cookieJar = ThreadLocal.withInitial(ConcurrentHashMap::new);
public void clear() {
cookieJar.get().clear();
}
public boolean hasCookies() {
return !cookieJar.get().isEmpty();
}
public List<String> getCookies() {
return cookieJar.get().values().stream().flatMap(List::stream).toList();
}
@Override
public Map<String, List<String>> get(URI uri, Map<String, List<String>> requestHeaders) throws IOException {
return cookieJar.get();
}
@Override
public void put(URI uri, Map<String, List<String>> responseHeaders) throws IOException {
cookieJar.get().putAll(responseHeaders);
}
}

View File

@@ -0,0 +1,56 @@
package nu.marginalia.crawl.fetcher;
import org.apache.hc.client5.http.classic.methods.HttpUriRequestBase;
import org.apache.hc.core5.http.ClassicHttpRequest;
import org.apache.hc.core5.http.HttpResponse;
import java.util.HashMap;
import java.util.Map;
import java.util.StringJoiner;
public class DomainCookies {
private final Map<String, String> cookies = new HashMap<>();
public boolean hasCookies() {
return !cookies.isEmpty();
}
public void updateCookieStore(HttpResponse response) {
for (var header : response.getHeaders()) {
if (header.getName().equalsIgnoreCase("Set-Cookie")) {
parseCookieHeader(header.getValue());
}
}
}
private void parseCookieHeader(String value) {
// Parse the Set-Cookie header value and extract the cookies
String[] parts = value.split(";");
String cookie = parts[0].trim();
if (cookie.contains("=")) {
String[] cookieParts = cookie.split("=", 2);
String name = cookieParts[0].trim();
String val = cookieParts[1].trim();
cookies.put(name, val);
}
}
public void paintRequest(HttpUriRequestBase request) {
request.addHeader("Cookie", createCookieHeader());
}
public void paintRequest(ClassicHttpRequest request) {
request.addHeader("Cookie", createCookieHeader());
}
private String createCookieHeader() {
StringJoiner sj = new StringJoiner("; ");
for (var cookie : cookies.entrySet()) {
sj.add(cookie.getKey() + "=" + cookie.getValue());
}
return sj.toString();
}
}
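A minimal sketch of the intended lifecycle (the URL is made up, and the response variable stands in for whatever the response handler receives): cookies are captured per domain and replayed on later requests, never persisted.

DomainCookies cookies = new DomainCookies();
HttpGet request = new HttpGet("https://www.example.com/");
// After executing a request for this domain, feed the response back in:
//     cookies.updateCookieStore(response);
// and replay the stored cookies on the next request to the same domain:
if (cookies.hasCookies()) {
    cookies.paintRequest(request);
}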

View File

@@ -8,6 +8,7 @@ import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.model.crawldata.CrawlerDomainStatus;
import org.apache.hc.client5.http.cookie.CookieStore;
import java.util.List;
@@ -15,20 +16,17 @@ import java.util.List;
public interface HttpFetcher extends AutoCloseable {
void setAllowAllContentTypes(boolean allowAllContentTypes);
Cookies getCookies();
CookieStore getCookies();
void clearCookies();
DomainProbeResult probeDomain(EdgeUrl url);
ContentTypeProbeResult probeContentType(
EdgeUrl url,
WarcRecorder recorder,
ContentTags tags) throws HttpFetcherImpl.RateLimitException;
HttpFetchResult fetchContent(EdgeUrl url,
WarcRecorder recorder,
DomainCookies cookies,
CrawlDelayTimer timer,
ContentTags tags,
ProbeType probeType) throws Exception;
ProbeType probeType);
List<EdgeUrl> fetchSitemapUrls(String rootSitemapUrl, CrawlDelayTimer delayTimer);
@@ -46,6 +44,7 @@ public interface HttpFetcher extends AutoCloseable {
/** This domain redirects to another domain */
record Redirect(EdgeDomain domain) implements DomainProbeResult {}
record RedirectSameDomain_Internal(EdgeUrl domain) implements DomainProbeResult {}
/** If the retrieval of the probed url was successful, return the url as it was fetched
* (which may be different from the url we probed, if we attempted another URL schema).
@@ -56,7 +55,10 @@ public interface HttpFetcher extends AutoCloseable {
}
sealed interface ContentTypeProbeResult {
record NoOp() implements ContentTypeProbeResult {}
record Ok(EdgeUrl resolvedUrl) implements ContentTypeProbeResult { }
record HttpError(int statusCode, String message) implements ContentTypeProbeResult { }
record Redirect(EdgeUrl location) implements ContentTypeProbeResult { }
record BadContentType(String contentType, int statusCode) implements ContentTypeProbeResult { }
record Timeout(java.lang.Exception ex) implements ContentTypeProbeResult { }
record Exception(java.lang.Exception ex) implements ContentTypeProbeResult { }

View File

@@ -5,66 +5,167 @@ import com.google.inject.Singleton;
import crawlercommons.robots.SimpleRobotRules;
import crawlercommons.robots.SimpleRobotRulesParser;
import nu.marginalia.UserAgent;
import nu.marginalia.crawl.fetcher.socket.NoSecuritySSL;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
import nu.marginalia.link_parser.LinkParser;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.ContentTypeLogic;
import nu.marginalia.model.body.DocumentBodyExtractor;
import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.model.crawldata.CrawlerDomainStatus;
import org.apache.hc.client5.http.ConnectionKeepAliveStrategy;
import org.apache.hc.client5.http.HttpRequestRetryStrategy;
import org.apache.hc.client5.http.classic.HttpClient;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.config.ConnectionConfig;
import org.apache.hc.client5.http.config.RequestConfig;
import org.apache.hc.client5.http.cookie.BasicCookieStore;
import org.apache.hc.client5.http.cookie.CookieStore;
import org.apache.hc.client5.http.cookie.StandardCookieSpec;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManagerBuilder;
import org.apache.hc.client5.http.ssl.DefaultClientTlsStrategy;
import org.apache.hc.core5.http.*;
import org.apache.hc.core5.http.io.HttpClientResponseHandler;
import org.apache.hc.core5.http.io.SocketConfig;
import org.apache.hc.core5.http.io.entity.EntityUtils;
import org.apache.hc.core5.http.io.support.ClassicRequestBuilder;
import org.apache.hc.core5.http.message.MessageSupport;
import org.apache.hc.core5.http.protocol.HttpContext;
import org.apache.hc.core5.pool.PoolStats;
import org.apache.hc.core5.util.TimeValue;
import org.apache.hc.core5.util.Timeout;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.parser.Parser;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLException;
import java.io.IOException;
import java.io.InputStream;
import java.net.SocketTimeoutException;
import java.net.URISyntaxException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.security.NoSuchAlgorithmException;
import java.time.Duration;
import java.util.*;
import java.util.concurrent.Executors;
import java.util.zip.GZIPInputStream;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
@Singleton
public class HttpFetcherImpl implements HttpFetcher {
public class HttpFetcherImpl implements HttpFetcher, HttpRequestRetryStrategy {
private final Logger logger = LoggerFactory.getLogger(getClass());
private final String userAgentString;
private final String userAgentIdentifier;
private final Cookies cookies = new Cookies();
private final CookieStore cookies = new BasicCookieStore();
private static final SimpleRobotRulesParser robotsParser = new SimpleRobotRulesParser();
private static final ContentTypeLogic contentTypeLogic = new ContentTypeLogic();
private final Marker crawlerAuditMarker = MarkerFactory.getMarker("CRAWLER");
private final Duration requestTimeout = Duration.ofSeconds(10);
private final LinkParser linkParser = new LinkParser();
@Override
public void setAllowAllContentTypes(boolean allowAllContentTypes) {
contentTypeLogic.setAllowAllContentTypes(allowAllContentTypes);
}
private final HttpClient client;
private final CloseableHttpClient client;
private PoolingHttpClientConnectionManager connectionManager;
private HttpClient createClient() {
return HttpClient.newBuilder()
.sslContext(NoSecuritySSL.buildSslContext())
.cookieHandler(cookies)
.followRedirects(HttpClient.Redirect.NORMAL)
.connectTimeout(Duration.ofSeconds(8))
.executor(Executors.newCachedThreadPool())
public PoolStats getPoolStats() {
return connectionManager.getTotalStats();
}
private CloseableHttpClient createClient() throws NoSuchAlgorithmException {
final ConnectionConfig connectionConfig = ConnectionConfig.custom()
.setSocketTimeout(10, TimeUnit.SECONDS)
.setConnectTimeout(30, TimeUnit.SECONDS)
.setValidateAfterInactivity(TimeValue.ofSeconds(5))
.build();
connectionManager = PoolingHttpClientConnectionManagerBuilder.create()
.setMaxConnPerRoute(2)
.setMaxConnTotal(5000)
.setDefaultConnectionConfig(connectionConfig)
.setTlsSocketStrategy(new DefaultClientTlsStrategy(SSLContext.getDefault()))
.build();
connectionManager.setDefaultSocketConfig(SocketConfig.custom()
.setSoLinger(TimeValue.ofSeconds(-1))
.setSoTimeout(Timeout.ofSeconds(10))
.build()
);
Thread.ofPlatform().daemon(true).start(() -> {
try {
for (;;) {
TimeUnit.SECONDS.sleep(15);
logger.info("Connection pool stats: {}", connectionManager.getTotalStats());
}
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
});
final RequestConfig defaultRequestConfig = RequestConfig.custom()
.setCookieSpec(StandardCookieSpec.RELAXED)
.setResponseTimeout(10, TimeUnit.SECONDS)
.setConnectionRequestTimeout(5, TimeUnit.MINUTES)
.build();
return HttpClients.custom()
.setDefaultCookieStore(cookies)
.setConnectionManager(connectionManager)
.setRetryStrategy(this)
.setKeepAliveStrategy(new ConnectionKeepAliveStrategy() {
// Default keep-alive duration is 3 minutes, but this is too long for us,
// as we are either going to re-use it fairly quickly or close it for a long time.
//
// So we set it to 30 seconds or clamp the server-provided value to a minimum of 10 seconds.
private static final TimeValue defaultValue = TimeValue.ofSeconds(30);
@Override
public TimeValue getKeepAliveDuration(HttpResponse response, HttpContext context) {
final Iterator<HeaderElement> it = MessageSupport.iterate(response, HeaderElements.KEEP_ALIVE);
while (it.hasNext()) {
final HeaderElement he = it.next();
final String param = he.getName();
final String value = he.getValue();
if (value == null)
continue;
if (!"timeout".equalsIgnoreCase(param))
continue;
try {
long timeout = Long.parseLong(value);
timeout = Math.clamp(timeout, 10, defaultValue.toSeconds());
return TimeValue.ofSeconds(timeout);
} catch (final NumberFormatException ignore) {
break;
}
}
return defaultValue;
}
})
.disableRedirectHandling()
.setDefaultRequestConfig(defaultRequestConfig)
.build();
}
@Override
public Cookies getCookies() {
public CookieStore getCookies() {
return cookies;
}
@@ -76,19 +177,27 @@ public class HttpFetcherImpl implements HttpFetcher {
@Inject
public HttpFetcherImpl(UserAgent userAgent)
{
this.client = createClient();
try {
this.client = createClient();
} catch (NoSuchAlgorithmException e) {
throw new RuntimeException(e);
}
this.userAgentString = userAgent.uaString();
this.userAgentIdentifier = userAgent.uaIdentifier();
}
public HttpFetcherImpl(String userAgent) {
this.client = createClient();
try {
this.client = createClient();
} catch (NoSuchAlgorithmException e) {
throw new RuntimeException(e);
}
this.userAgentString = userAgent;
this.userAgentIdentifier = userAgent;
}
// Not necessary in prod, but useful in test
public void close() {
public void close() throws IOException {
client.close();
}
@@ -101,30 +210,94 @@ public class HttpFetcherImpl implements HttpFetcher {
*/
@Override
public DomainProbeResult probeDomain(EdgeUrl url) {
HttpRequest head;
try {
head = HttpRequest.newBuilder()
.HEAD()
.uri(url.asURI())
.header("User-agent", userAgentString)
.timeout(requestTimeout)
.build();
} catch (URISyntaxException e) {
return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Invalid URL");
}
List<EdgeUrl> urls = new ArrayList<>();
urls.add(url);
try {
var rsp = client.send(head, HttpResponse.BodyHandlers.discarding());
EdgeUrl rspUri = new EdgeUrl(rsp.uri());
int redirects = 0;
AtomicBoolean tryGet = new AtomicBoolean(false);
if (!Objects.equals(rspUri.domain, url.domain)) {
return new DomainProbeResult.Redirect(rspUri.domain);
while (!urls.isEmpty() && ++redirects < 5) {
ClassicHttpRequest request;
EdgeUrl topUrl = urls.removeFirst();
try {
if (tryGet.get()) {
request = ClassicRequestBuilder.get(topUrl.asURI())
.addHeader("User-Agent", userAgentString)
.addHeader("Accept-Encoding", "gzip")
.addHeader("Range", "bytes=0-255")
.build();
} else {
request = ClassicRequestBuilder.head(topUrl.asURI())
.addHeader("User-Agent", userAgentString)
.addHeader("Accept-Encoding", "gzip")
.build();
}
} catch (URISyntaxException e) {
return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Invalid URL");
}
return new DomainProbeResult.Ok(rspUri);
}
catch (Exception ex) {
return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, ex.getMessage());
try {
var result = SendLock.wrapSend(client, request, response -> {
EntityUtils.consume(response.getEntity());
return switch (response.getCode()) {
case 200 -> new DomainProbeResult.Ok(url);
case 405 -> {
if (!tryGet.get()) {
tryGet.set(true);
yield new DomainProbeResult.RedirectSameDomain_Internal(url);
}
else {
yield new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "HTTP status 405, tried HEAD and GET?!");
}
}
case 301, 302, 307 -> {
var location = response.getFirstHeader("Location");
if (location != null) {
Optional<EdgeUrl> newUrl = linkParser.parseLink(topUrl, location.getValue());
if (newUrl.isEmpty()) {
yield new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Invalid location header on redirect");
}
EdgeUrl newEdgeUrl = newUrl.get();
if (newEdgeUrl.domain.equals(topUrl.domain)) {
yield new DomainProbeResult.RedirectSameDomain_Internal(newEdgeUrl);
}
else {
yield new DomainProbeResult.Redirect(newEdgeUrl.domain);
}
}
yield new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "No location header on redirect");
}
default ->
new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "HTTP status " + response.getCode());
};
});
if (result instanceof DomainProbeResult.RedirectSameDomain_Internal(EdgeUrl redirUrl)) {
urls.add(redirUrl);
}
else {
return result;
}
// We don't have robots.txt yet, so we'll assume a request delay of 1 second
TimeUnit.SECONDS.sleep(1);
}
catch (SocketTimeoutException ex) {
return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Timeout during domain probe");
}
catch (Exception ex) {
return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Error during domain probe");
}
}
return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Failed to resolve domain root");
}
/** Perform a HEAD request to fetch the content type of a URL.
@@ -135,70 +308,73 @@ public class HttpFetcherImpl implements HttpFetcher {
* recorded in the WARC file on failure.
*/
public ContentTypeProbeResult probeContentType(EdgeUrl url,
WarcRecorder warcRecorder,
ContentTags tags) throws RateLimitException {
if (tags.isEmpty() && contentTypeLogic.isUrlLikeBinary(url)) {
try {
var headBuilder = HttpRequest.newBuilder()
.HEAD()
.uri(url.asURI())
.header("User-agent", userAgentString)
.header("Accept-Encoding", "gzip")
.timeout(requestTimeout)
;
var rsp = client.send(headBuilder.build(), HttpResponse.BodyHandlers.discarding());
var headers = rsp.headers();
var contentTypeHeader = headers.firstValue("Content-Type").orElse(null);
if (contentTypeHeader != null && !contentTypeLogic.isAllowableContentType(contentTypeHeader)) {
warcRecorder.flagAsFailedContentTypeProbe(url, contentTypeHeader, rsp.statusCode());
return new ContentTypeProbeResult.BadContentType(contentTypeHeader, rsp.statusCode());
}
// Update the URL to the final URL of the HEAD request, otherwise we might end up doing
// HEAD 301 url1 -> url2
// HEAD 200 url2
// GET 301 url1 -> url2
// GET 200 url2
// which is not what we want. Overall we want to do as few requests as possible to not raise
// too many eyebrows when looking at the logs on the target server. Overall it's probably desirable
// that it looks like the traffic makes sense, as opposed to looking like a broken bot.
var redirectUrl = new EdgeUrl(rsp.uri());
EdgeUrl ret;
if (Objects.equals(redirectUrl.domain, url.domain)) ret = redirectUrl;
else ret = url;
// Intercept rate limiting
if (rsp.statusCode() == 429) {
throw new HttpFetcherImpl.RateLimitException(headers.firstValue("Retry-After").orElse("1"));
}
return new ContentTypeProbeResult.Ok(ret);
}
catch (HttpTimeoutException ex) {
warcRecorder.flagAsTimeout(url);
return new ContentTypeProbeResult.Timeout(ex);
}
catch (RateLimitException ex) {
throw ex;
}
catch (Exception ex) {
logger.error("Error during fetching {}[{}]", ex.getClass().getSimpleName(), ex.getMessage());
warcRecorder.flagAsError(url, ex);
return new ContentTypeProbeResult.Exception(ex);
}
DomainCookies cookies,
CrawlDelayTimer timer,
ContentTags tags) {
if (!tags.isEmpty() || !contentTypeLogic.isUrlLikeBinary(url)) {
return new ContentTypeProbeResult.NoOp();
}
try {
ClassicHttpRequest head = ClassicRequestBuilder.head(url.asURI())
.addHeader("User-Agent", userAgentString)
.addHeader("Accept-Encoding", "gzip")
.build();
cookies.paintRequest(head);
return SendLock.wrapSend(client, head, (rsp) -> {
cookies.updateCookieStore(rsp);
EntityUtils.consume(rsp.getEntity());
int statusCode = rsp.getCode();
// Handle redirects
if (statusCode == 301 || statusCode == 302 || statusCode == 307) {
var location = rsp.getFirstHeader("Location");
if (location != null) {
Optional<EdgeUrl> newUrl = linkParser.parseLink(url, location.getValue());
if (newUrl.isEmpty())
return new ContentTypeProbeResult.HttpError(statusCode, "Invalid location header on redirect");
return new ContentTypeProbeResult.Redirect(newUrl.get());
}
}
if (statusCode == 405) {
// If we get a 405, we can't probe the content type with HEAD, so we'll just say it's ok
return new ContentTypeProbeResult.Ok(url);
}
// Handle errors
if (statusCode < 200 || statusCode > 300) {
return new ContentTypeProbeResult.HttpError(statusCode, "Bad status code");
}
// Handle missing content type
var ctHeader = rsp.getFirstHeader("Content-Type");
if (ctHeader == null) {
return new ContentTypeProbeResult.HttpError(statusCode, "Missing Content-Type header");
}
var contentType = ctHeader.getValue();
// Check if the content type is allowed
if (contentTypeLogic.isAllowableContentType(contentType)) {
return new ContentTypeProbeResult.Ok(url);
} else {
return new ContentTypeProbeResult.BadContentType(contentType, statusCode);
}
});
}
catch (SocketTimeoutException ex) {
return new ContentTypeProbeResult.Timeout(ex);
}
catch (Exception ex) {
logger.error("Error during fetching {}[{}]", ex.getClass().getSimpleName(), ex.getMessage());
return new ContentTypeProbeResult.Exception(ex);
}
finally {
timer.waitFetchDelay();
}
return new ContentTypeProbeResult.Ok(url);
}
/** Fetch the content of a URL, and record it in a WARC file,
@@ -208,37 +384,76 @@ public class HttpFetcherImpl implements HttpFetcher {
@Override
public HttpFetchResult fetchContent(EdgeUrl url,
WarcRecorder warcRecorder,
DomainCookies cookies,
CrawlDelayTimer timer,
ContentTags contentTags,
ProbeType probeType)
throws Exception
{
var getBuilder = HttpRequest.newBuilder()
.GET()
.uri(url.asURI())
.header("User-agent", userAgentString)
.header("Accept-Encoding", "gzip")
.header("Accept-Language", "en,*;q=0.5")
.header("Accept", "text/html, application/xhtml+xml, text/*;q=0.8")
.timeout(requestTimeout)
;
try {
if (probeType == HttpFetcher.ProbeType.FULL) {
try {
var probeResult = probeContentType(url, cookies, timer, contentTags);
logger.info(crawlerAuditMarker, "Probe result {} for {}", probeResult.getClass().getSimpleName(), url);
switch (probeResult) {
case HttpFetcher.ContentTypeProbeResult.NoOp():
break; //
case HttpFetcher.ContentTypeProbeResult.Ok(EdgeUrl resolvedUrl):
url = resolvedUrl; // If we were redirected while probing, use the final URL for fetching
break;
case ContentTypeProbeResult.BadContentType badContentType:
warcRecorder.flagAsFailedContentTypeProbe(url, badContentType.contentType(), badContentType.statusCode());
return new HttpFetchResult.ResultNone();
case ContentTypeProbeResult.Timeout(Exception ex):
warcRecorder.flagAsTimeout(url);
return new HttpFetchResult.ResultException(ex);
case ContentTypeProbeResult.Exception(Exception ex):
warcRecorder.flagAsError(url, ex);
return new HttpFetchResult.ResultException(ex);
case ContentTypeProbeResult.HttpError httpError:
return new HttpFetchResult.ResultException(new HttpException("HTTP status code " + httpError.statusCode() + ": " + httpError.message()));
case ContentTypeProbeResult.Redirect redirect:
return new HttpFetchResult.ResultRedirect(redirect.location());
}
} catch (Exception ex) {
logger.warn("Failed to fetch {}", url, ex);
return new HttpFetchResult.ResultException(ex);
}
contentTags.paint(getBuilder);
HttpFetchResult result = warcRecorder.fetch(client, getBuilder.build());
if (result instanceof HttpFetchResult.ResultOk ok) {
if (ok.statusCode() == 429) {
throw new RateLimitException(Objects.requireNonNullElse(ok.header("Retry-After"), "1"));
}
if (ok.statusCode() == 304) {
return new HttpFetchResult.Result304Raw();
}
if (ok.statusCode() == 200) {
return ok;
HttpGet request = new HttpGet(url.asURI());
request.addHeader("User-Agent", userAgentString);
request.addHeader("Accept-Encoding", "gzip");
request.addHeader("Accept-Language", "en,*;q=0.5");
request.addHeader("Accept", "text/html, application/xhtml+xml, text/*;q=0.8");
contentTags.paint(request);
try (var sl = new SendLock()) {
HttpFetchResult result = warcRecorder.fetch(client, cookies, request);
if (result instanceof HttpFetchResult.ResultOk ok) {
if (ok.statusCode() == 304) {
return new HttpFetchResult.Result304Raw();
}
}
switch (result) {
case HttpFetchResult.ResultOk ok -> logger.info(crawlerAuditMarker, "Fetch result OK {} for {}", ok.statusCode(), url);
case HttpFetchResult.ResultRedirect redirect -> logger.info(crawlerAuditMarker, "Fetch result redirect: {} for {}", redirect.url(), url);
case HttpFetchResult.ResultNone none -> logger.info(crawlerAuditMarker, "Fetch result none for {}", url);
case HttpFetchResult.ResultException ex -> logger.error(crawlerAuditMarker, "Fetch result exception for " + url + ": {}", ex.ex());
case HttpFetchResult.Result304Raw raw -> logger.info(crawlerAuditMarker, "Fetch result: 304 Raw for {}", url);
case HttpFetchResult.Result304ReplacedWithReference ref -> logger.info(crawlerAuditMarker, "Fetch result: 304 With reference for {}", url);
}
return result;
}
}
catch (Exception ex) {
ex.printStackTrace();
return new HttpFetchResult.ResultException(ex);
}
return result;
}
@Override
@@ -246,6 +461,7 @@ public class HttpFetcherImpl implements HttpFetcher {
return new SitemapRetriever();
}
/** Recursively fetch sitemaps */
@Override
public List<EdgeUrl> fetchSitemapUrls(String root, CrawlDelayTimer delayTimer) {
try {
@@ -265,7 +481,7 @@ public class HttpFetcherImpl implements HttpFetcher {
while (!sitemapQueue.isEmpty() && ret.size() < 20_000 && ++fetchedSitemaps < 10) {
var head = sitemapQueue.removeFirst();
switch (fetchSitemap(head)) {
switch (fetchSingleSitemap(head)) {
case SitemapResult.SitemapUrls(List<String> urls) -> {
for (var url : urls) {
@@ -301,62 +517,66 @@ public class HttpFetcherImpl implements HttpFetcher {
}
private SitemapResult fetchSitemap(EdgeUrl sitemapUrl) throws URISyntaxException, IOException, InterruptedException {
HttpRequest getRequest = HttpRequest.newBuilder()
.GET()
.uri(sitemapUrl.asURI())
.header("Accept-Encoding", "gzip")
.header("Accept", "text/*, */*;q=0.9")
.header("User-agent", userAgentString)
.timeout(requestTimeout)
.build();
private SitemapResult fetchSingleSitemap(EdgeUrl sitemapUrl) throws URISyntaxException {
HttpGet getRequest = new HttpGet(sitemapUrl.asURI());
var response = client.send(getRequest, HttpResponse.BodyHandlers.ofInputStream());
if (response.statusCode() != 200) {
return new SitemapResult.SitemapError();
getRequest.addHeader("User-Agent", userAgentString);
getRequest.addHeader("Accept-Encoding", "gzip");
getRequest.addHeader("Accept", "text/*, */*;q=0.9");
getRequest.addHeader("User-Agent", userAgentString);
try (var sl = new SendLock()) {
return client.execute(getRequest, response -> {
try {
if (response.getCode() != 200) {
return new SitemapResult.SitemapError();
}
Document parsedSitemap = Jsoup.parse(
EntityUtils.toString(response.getEntity()),
sitemapUrl.toString(),
Parser.xmlParser()
);
if (parsedSitemap.childrenSize() == 0) {
return new SitemapResult.SitemapError();
}
String rootTagName = parsedSitemap.child(0).tagName();
return switch (rootTagName.toLowerCase()) {
case "sitemapindex" -> {
List<String> references = new ArrayList<>();
for (var locTag : parsedSitemap.getElementsByTag("loc")) {
references.add(locTag.text().trim());
}
yield new SitemapResult.SitemapReferences(Collections.unmodifiableList(references));
}
case "urlset" -> {
List<String> urls = new ArrayList<>();
for (var locTag : parsedSitemap.select("url > loc")) {
urls.add(locTag.text().trim());
}
yield new SitemapResult.SitemapUrls(Collections.unmodifiableList(urls));
}
case "rss", "atom" -> {
List<String> urls = new ArrayList<>();
for (var locTag : parsedSitemap.select("link, url")) {
urls.add(locTag.text().trim());
}
yield new SitemapResult.SitemapUrls(Collections.unmodifiableList(urls));
}
default -> new SitemapResult.SitemapError();
};
}
finally {
EntityUtils.consume(response.getEntity());
}
});
}
try (InputStream inputStream = response.body()) {
InputStream parserStream;
if (sitemapUrl.path.endsWith(".gz")) {
parserStream = new GZIPInputStream(inputStream);
}
else {
parserStream = inputStream;
}
Document parsedSitemap = Jsoup.parse(parserStream, "UTF-8", sitemapUrl.toString(), Parser.xmlParser());
if (parsedSitemap.childrenSize() == 0) {
return new SitemapResult.SitemapError();
}
String rootTagName = parsedSitemap.child(0).tagName();
return switch (rootTagName.toLowerCase()) {
case "sitemapindex" -> {
List<String> references = new ArrayList<>();
for (var locTag : parsedSitemap.getElementsByTag("loc")) {
references.add(locTag.text().trim());
}
yield new SitemapResult.SitemapReferences(Collections.unmodifiableList(references));
}
case "urlset" -> {
List<String> urls = new ArrayList<>();
for (var locTag : parsedSitemap.select("url > loc")) {
urls.add(locTag.text().trim());
}
yield new SitemapResult.SitemapUrls(Collections.unmodifiableList(urls));
}
case "rss", "atom" -> {
List<String> urls = new ArrayList<>();
for (var locTag : parsedSitemap.select("link, url")) {
urls.add(locTag.text().trim());
}
yield new SitemapResult.SitemapUrls(Collections.unmodifiableList(urls));
}
default -> new SitemapResult.SitemapError();
};
catch (Exception ex) {
logger.warn("Error while fetching sitemap {}: {} ({})", sitemapUrl, ex.getClass().getSimpleName(), ex.getMessage());
return new SitemapResult.SitemapError();
}
}
@@ -380,16 +600,14 @@ public class HttpFetcherImpl implements HttpFetcher {
}
private Optional<SimpleRobotRules> fetchAndParseRobotsTxt(EdgeUrl url, WarcRecorder recorder) {
try {
var getRequest = HttpRequest.newBuilder()
.GET()
.uri(url.asURI())
.header("Accept-Encoding", "gzip")
.header("Accept", "text/*, */*;q=0.9")
.header("User-agent", userAgentString)
.timeout(requestTimeout);
try (var sl = new SendLock()) {
HttpFetchResult result = recorder.fetch(client, getRequest.build());
HttpGet request = new HttpGet(url.asURI());
request.addHeader("User-Agent", userAgentString);
request.addHeader("Accept-Encoding", "gzip");
request.addHeader("Accept", "text/*, */*;q=0.9");
HttpFetchResult result = recorder.fetch(client, new DomainCookies(), request);
return DocumentBodyExtractor.asBytes(result).mapOpt((contentType, body) ->
robotsParser.parseContent(url.toString(),
@@ -403,6 +621,59 @@ public class HttpFetcherImpl implements HttpFetcher {
}
}
@Override
public boolean retryRequest(HttpRequest request, IOException exception, int executionCount, HttpContext context) {
if (exception instanceof SocketTimeoutException) { // Timeouts are not recoverable
return false;
}
if (exception instanceof SSLException) { // SSL exceptions are unlikely to be recoverable
return false;
}
return executionCount <= 3;
}
@Override
public boolean retryRequest(HttpResponse response, int executionCount, HttpContext context) {
return switch (response.getCode()) {
case 500, 503 -> executionCount <= 2;
case 429 -> executionCount <= 3;
default -> false;
};
}
@Override
public TimeValue getRetryInterval(HttpRequest request, IOException exception, int executionCount, HttpContext context) {
return TimeValue.ofSeconds(1);
}
@Override
public TimeValue getRetryInterval(HttpResponse response, int executionCount, HttpContext context) {
int statusCode = response.getCode();
// Give 503 a bit more time
if (statusCode == 503) return TimeValue.ofSeconds(5);
if (statusCode == 429) {
// get the Retry-After header
Header retryAfterHeader = response.getFirstHeader("Retry-After");
if (retryAfterHeader == null) {
    return TimeValue.ofSeconds(2);
}
String retryAfter = retryAfterHeader.getValue();
try {
int retryAfterTime = Integer.parseInt(retryAfter);
retryAfterTime = Math.clamp(retryAfterTime, 1, 5);
return TimeValue.ofSeconds(retryAfterTime);
} catch (NumberFormatException e) {
logger.warn("Invalid Retry-After header: {}", retryAfter);
}
}
return TimeValue.ofSeconds(2);
}
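A few worked examples of the interval selection above (values are illustrative):

// 503 response                     -> retry after 5 seconds
// 429 with Retry-After: 120        -> Math.clamp(120, 1, 5) -> retry after 5 seconds
// 429 with Retry-After: 1          -> retry after 1 second
// 429 with missing/malformed value -> fall back to 2 seconds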
public static class RateLimitException extends Exception {
private final String retryAfter;
@@ -423,5 +694,31 @@ public class HttpFetcherImpl implements HttpFetcher {
}
}
}
}
class SendLock implements AutoCloseable {
private static final Semaphore maxConcurrentRequests = new Semaphore(Integer.getInteger("crawler.maxConcurrentRequests", 512));
boolean closed = false;
public SendLock() {
maxConcurrentRequests.acquireUninterruptibly();
}
public static <T> T wrapSend(HttpClient client, final ClassicHttpRequest request,
final HttpClientResponseHandler<? extends T> responseHandler) throws IOException {
try (var lock = new SendLock()) {
return client.execute(request, responseHandler);
}
}
@Override
public void close() {
if (!closed) {
maxConcurrentRequests.release();
closed = true;
}
}
}
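A minimal sketch of how the throttle is meant to be used: at most crawler.maxConcurrentRequests (default 512) requests are in flight across all crawler threads, regardless of pool size.

try (var lock = new SendLock()) {
    // any client.execute(...) issued here counts against the shared semaphore
}

// or, equivalently, through the helper used elsewhere in this class:
// int status = SendLock.wrapSend(client, request, rsp -> rsp.getCode());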

View File

@@ -1,15 +1,20 @@
package nu.marginalia.crawl.fetcher.warc;
import org.apache.commons.io.IOUtils;
import org.apache.commons.io.input.BOMInputStream;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.core5.http.ClassicHttpResponse;
import org.apache.hc.core5.http.Header;
import org.netpreserve.jwarc.WarcTruncationReason;
import java.io.*;
import java.net.http.HttpHeaders;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.zip.GZIPInputStream;
import java.time.Duration;
import java.time.Instant;
import java.util.Arrays;
import static nu.marginalia.crawl.fetcher.warc.ErrorBuffer.suppressContentEncoding;
/** Input buffer for temporary storage of a HTTP response
* This may be in-memory or on-disk, at the discretion of
@@ -17,9 +22,9 @@ import java.util.zip.GZIPInputStream;
* */
public abstract class WarcInputBuffer implements AutoCloseable {
protected WarcTruncationReason truncationReason = WarcTruncationReason.NOT_TRUNCATED;
protected HttpHeaders headers;
protected Header[] headers;
WarcInputBuffer(HttpHeaders headers) {
WarcInputBuffer(Header[] headers) {
this.headers = headers;
}
@@ -31,7 +36,7 @@ public abstract class WarcInputBuffer implements AutoCloseable {
public final WarcTruncationReason truncationReason() { return truncationReason; }
public final HttpHeaders headers() { return headers; }
public final Header[] headers() { return headers; }
/** Create a buffer for a response.
* If the response is small and not compressed, it will be stored in memory.
@@ -39,34 +44,52 @@ public abstract class WarcInputBuffer implements AutoCloseable {
* and suppressed from the headers.
* If an error occurs, a buffer will be created with no content and an error status.
*/
static WarcInputBuffer forResponse(HttpResponse<InputStream> rsp) {
if (rsp == null)
static WarcInputBuffer forResponse(ClassicHttpResponse response,
HttpGet request,
Duration timeLimit) throws IOException {
if (response == null)
return new ErrorBuffer();
var headers = rsp.headers();
try (var is = rsp.body()) {
int contentLength = (int) headers.firstValueAsLong("Content-Length").orElse(-1L);
String contentEncoding = headers.firstValue("Content-Encoding").orElse(null);
var entity = response.getEntity();
if (contentEncoding == null && contentLength > 0 && contentLength < 8192) {
if (null == entity) {
return new ErrorBuffer();
}
InputStream is = null;
try {
is = entity.getContent();
long length = entity.getContentLength();
if (length > 0 && length < 8192) {
// If the content is small and not compressed, we can just read it into memory
return new MemoryBuffer(headers, is, contentLength);
}
else {
return new MemoryBuffer(response.getHeaders(), request, timeLimit, is, (int) length);
} else {
// Otherwise, we unpack it into a file and read it from there
return new FileBuffer(headers, is);
return new FileBuffer(response.getHeaders(), request, timeLimit, is);
}
}
catch (Exception ex) {
return new ErrorBuffer();
finally {
try {
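// Drain whatever remains of the body so the connection can be reused
// rather than reset when the stream is closed.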
is.skip(Long.MAX_VALUE);
}
catch (IOException e) {
// Ignore the exception
}
finally {
// Close the input stream
IOUtils.closeQuietly(is);
}
}
}
/** Copy an input stream to an output stream, with a maximum size and time limit */
protected void copy(InputStream is, OutputStream os) {
long startTime = System.currentTimeMillis();
protected void copy(InputStream is, HttpGet request, OutputStream os, Duration timeLimit) {
Instant start = Instant.now();
Instant timeout = start.plus(timeLimit);
long size = 0;
byte[] buffer = new byte[8192];
@@ -76,24 +99,105 @@ public abstract class WarcInputBuffer implements AutoCloseable {
while (true) {
try {
Duration remaining = Duration.between(Instant.now(), timeout);
if (remaining.isNegative()) {
truncationReason = WarcTruncationReason.TIME;
// Abort the request if the time limit is exceeded
// so we neither keep the connection open forever nor are forced to consume
// the stream to the end
request.abort();
break;
}
int n = is.read(buffer);
if (n < 0) break;
size += n;
os.write(buffer, 0, n);
if (size > WarcRecorder.MAX_SIZE) {
// Even if we've exceeded the max length,
// we keep consuming the stream up until the end or a timeout,
// as closing the stream means resetting the connection, and
// that's generally not desirable.
if (size < WarcRecorder.MAX_SIZE) {
os.write(buffer, 0, n);
}
else if (truncationReason != WarcTruncationReason.LENGTH) {
truncationReason = WarcTruncationReason.LENGTH;
break;
}
if (System.currentTimeMillis() - startTime > WarcRecorder.MAX_TIME) {
truncationReason = WarcTruncationReason.TIME;
break;
}
} catch (IOException e) {
throw new RuntimeException(e);
truncationReason = WarcTruncationReason.UNSPECIFIED;
}
}
}
/** Takes a Content-Range header and checks if it is complete.
* A complete range is one that covers the entire resource.
* For example, "bytes 0-2047/2048" is a complete range,
* while "bytes 0-1023/2048" or "bytes 0-1023/*" are not.
*/
public boolean isRangeComplete(Header[] headers) {
// Find the Content-Range header
String contentRangeHeader = null;
for (var header : headers) {
if ("Content-Range".equalsIgnoreCase(header.getName())) {
contentRangeHeader = header.getValue();
break;
}
}
// Return true if header is null or empty
if (contentRangeHeader == null || contentRangeHeader.isEmpty()) {
return true;
}
try {
// Content-Range format: "bytes range-start-range-end/size"
// e.g., "bytes 0-1023/2048" or "bytes 0-1023/*"
// Get the part after "bytes "
String[] parts = contentRangeHeader.split(" ", 2);
if (parts.length < 2) {
return false;
}
// Get the range and size parts (e.g., "0-1023/2048")
String rangeAndSize = parts[1];
String[] rangeAndSizeParts = rangeAndSize.split("/", 2);
if (rangeAndSizeParts.length < 2) {
return false;
}
// Get the range (e.g., "0-1023")
String range = rangeAndSizeParts[0];
String[] rangeParts = range.split("-", 2);
if (rangeParts.length < 2) {
return false;
}
// Get the size (e.g., "2048" or "*")
String size = rangeAndSizeParts[1];
// If size is "*", we don't know the total size, so return false
if ("*".equals(size)) {
return false;
}
// Parse as long to handle large files
long rangeStart = Long.parseLong(rangeParts[0]);
long rangeEnd = Long.parseLong(rangeParts[1]);
long totalSize = Long.parseLong(size);
// Check if the range covers the entire resource
return rangeStart == 0 && rangeEnd == totalSize - 1;
} catch (NumberFormatException | ArrayIndexOutOfBoundsException e) {
return false;
}
}
}
@@ -101,7 +205,7 @@ public abstract class WarcInputBuffer implements AutoCloseable {
/** Pseudo-buffer for when we have an error */
class ErrorBuffer extends WarcInputBuffer {
public ErrorBuffer() {
super(HttpHeaders.of(Map.of(), (k,v)->false));
super(new Header[0]);
truncationReason = WarcTruncationReason.UNSPECIFIED;
}
@@ -118,17 +222,29 @@ class ErrorBuffer extends WarcInputBuffer {
@Override
public void close() throws Exception {}
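/** Strip the Content-Encoding header, since the stored body is the transparently
* decoded (un-gzipped) content rather than the on-the-wire bytes. */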
static Header[] suppressContentEncoding(Header[] headers) {
return Arrays.stream(headers).filter(header -> !"Content-Encoding".equalsIgnoreCase(header.getName())).toArray(Header[]::new);
}
}
/** Buffer for when we have the response in memory */
class MemoryBuffer extends WarcInputBuffer {
byte[] data;
public MemoryBuffer(HttpHeaders headers, InputStream responseStream, int size) {
super(headers);
public MemoryBuffer(Header[] headers, HttpGet request, Duration timeLimit, InputStream responseStream, int size) {
super(suppressContentEncoding(headers));
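// The crawler injects a "Range: bytes=0-MAX_SIZE" request header; if the server
// honored it and returned a partial Content-Range, mark the record as truncated.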
if (!isRangeComplete(headers)) {
truncationReason = WarcTruncationReason.LENGTH;
} else {
truncationReason = WarcTruncationReason.NOT_TRUNCATED;
}
var outputStream = new ByteArrayOutputStream(size);
copy(responseStream, outputStream);
copy(responseStream, request, outputStream, timeLimit);
data = outputStream.toByteArray();
}
@@ -152,40 +268,25 @@ class MemoryBuffer extends WarcInputBuffer {
class FileBuffer extends WarcInputBuffer {
private final Path tempFile;
public FileBuffer(HttpHeaders headers, InputStream responseStream) throws IOException {
public FileBuffer(Header[] headers, HttpGet request, Duration timeLimit, InputStream responseStream) throws IOException {
super(suppressContentEncoding(headers));
if (!isRangeComplete(headers)) {
truncationReason = WarcTruncationReason.LENGTH;
} else {
truncationReason = WarcTruncationReason.NOT_TRUNCATED;
}
this.tempFile = Files.createTempFile("rsp", ".html");
if ("gzip".equalsIgnoreCase(headers.firstValue("Content-Encoding").orElse(""))) {
try (var out = Files.newOutputStream(tempFile)) {
copy(new GZIPInputStream(responseStream), out);
}
catch (Exception ex) {
truncationReason = WarcTruncationReason.UNSPECIFIED;
}
try (var out = Files.newOutputStream(tempFile)) {
copy(responseStream, request, out, timeLimit);
}
else {
try (var out = Files.newOutputStream(tempFile)) {
copy(responseStream, out);
}
catch (Exception ex) {
truncationReason = WarcTruncationReason.UNSPECIFIED;
}
catch (Exception ex) {
truncationReason = WarcTruncationReason.UNSPECIFIED;
}
}
private static HttpHeaders suppressContentEncoding(HttpHeaders headers) {
return HttpHeaders.of(headers.map(), (k, v) -> {
if ("Content-Encoding".equalsIgnoreCase(k)) {
return false;
}
return !"Transfer-Encoding".equalsIgnoreCase(k);
});
}
public InputStream read() throws IOException {
return Files.newInputStream(tempFile);
}

View File

@@ -1,6 +1,8 @@
package nu.marginalia.crawl.fetcher.warc;
import org.apache.commons.lang3.StringUtils;
import org.apache.hc.core5.http.ClassicHttpResponse;
import org.apache.hc.core5.http.Header;
import java.net.URI;
import java.net.URLEncoder;
@@ -17,7 +19,7 @@ import java.util.stream.Collectors;
public class WarcProtocolReconstructor {
static String getHttpRequestString(String method,
Map<String, List<String>> mainHeaders,
Header[] mainHeaders,
Map<String, List<String>> extraHeaders,
URI uri) {
StringBuilder requestStringBuilder = new StringBuilder();
@@ -34,12 +36,13 @@ public class WarcProtocolReconstructor {
Set<String> addedHeaders = new HashSet<>();
mainHeaders.forEach((k, values) -> {
for (var value : values) {
addedHeaders.add(k);
requestStringBuilder.append(capitalizeHeader(k)).append(": ").append(value).append("\r\n");
}
});
for (var header : mainHeaders) {
String k = header.getName();
String v = header.getValue();
addedHeaders.add(k);
requestStringBuilder.append(capitalizeHeader(k)).append(": ").append(v).append("\r\n");
}
extraHeaders.forEach((k, values) -> {
if (!addedHeaders.contains(k)) {
@@ -87,6 +90,12 @@ public class WarcProtocolReconstructor {
return "HTTP/" + version + " " + statusCode + " " + statusMessage + "\r\n" + headerString + "\r\n\r\n";
}
static String getResponseHeader(ClassicHttpResponse response, long size) {
String headerString = getHeadersAsString(response.getHeaders(), size);
return response.getVersion().format() + " " + response.getCode() + " " + response.getReasonPhrase() + "\r\n" + headerString + "\r\n\r\n";
}
private static final Map<Integer, String> STATUS_CODE_MAP = Map.ofEntries(
Map.entry(200, "OK"),
Map.entry(201, "Created"),
@@ -149,6 +158,37 @@ public class WarcProtocolReconstructor {
return joiner.toString();
}
static private String getHeadersAsString(Header[] headers, long responseSize) {
StringJoiner joiner = new StringJoiner("\r\n");
for (var header : headers) {
String headerCapitalized = capitalizeHeader(header.getName());
// Omit pseudoheaders injected by the crawler itself
if (headerCapitalized.startsWith("X-Marginalia"))
continue;
// Omit Transfer-Encoding and Content-Encoding headers
if (headerCapitalized.equals("Transfer-Encoding"))
continue;
if (headerCapitalized.equals("Content-Encoding"))
continue;
// Since we're transparently decoding gzip, we need to update the Content-Length header
// to reflect the actual size of the response body. We'll do this at the end.
if (headerCapitalized.equals("Content-Length"))
continue;
joiner.add(headerCapitalized + ": " + header.getValue());
}
joiner.add("Content-Length: " + responseSize);
return joiner.toString();
}
static private String getHeadersAsString(HttpHeaders headers, long responseSize) {
StringJoiner joiner = new StringJoiner("\r\n");

View File

@@ -1,11 +1,16 @@
package nu.marginalia.crawl.fetcher.warc;
import nu.marginalia.crawl.fetcher.ContentTags;
import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.DomainCookies;
import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.link_parser.LinkParser;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.HttpFetchResult;
import org.apache.hc.client5.http.classic.HttpClient;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.core5.http.NameValuePair;
import org.jetbrains.annotations.Nullable;
import org.netpreserve.jwarc.*;
import org.slf4j.Logger;
@@ -14,10 +19,9 @@ import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.InputStream;
import java.net.InetAddress;
import java.net.SocketTimeoutException;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.http.HttpClient;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
@@ -48,22 +52,15 @@ public class WarcRecorder implements AutoCloseable {
// Affix a version string in case we need to change the format in the future
// in some way
private final String warcRecorderVersion = "1.0";
private final Cookies cookies;
private final LinkParser linkParser = new LinkParser();
/**
* Create a new WarcRecorder that will write to the given file
*
* @param warcFile The file to write to
*/
public WarcRecorder(Path warcFile, HttpFetcherImpl fetcher) throws IOException {
public WarcRecorder(Path warcFile) throws IOException {
this.warcFile = warcFile;
this.writer = new WarcWriter(warcFile);
this.cookies = fetcher.getCookies();
}
public WarcRecorder(Path warcFile, Cookies cookies) throws IOException {
this.warcFile = warcFile;
this.writer = new WarcWriter(warcFile);
this.cookies = cookies;
}
/**
@@ -73,16 +70,25 @@ public class WarcRecorder implements AutoCloseable {
public WarcRecorder() throws IOException {
this.warcFile = Files.createTempFile("warc", ".warc.gz");
this.writer = new WarcWriter(this.warcFile);
this.cookies = new Cookies();
temporaryFile = true;
}
public HttpFetchResult fetch(HttpClient client,
java.net.http.HttpRequest request)
DomainCookies cookies,
HttpGet request)
throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException
{
URI requestUri = request.uri();
return fetch(client, cookies, request, Duration.ofMillis(MAX_TIME));
}
public HttpFetchResult fetch(HttpClient client,
DomainCookies cookies,
HttpGet request,
Duration timeout)
throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException
{
URI requestUri = request.getUri();
WarcDigestBuilder responseDigestBuilder = new WarcDigestBuilder();
WarcDigestBuilder payloadDigestBuilder = new WarcDigestBuilder();
@@ -90,121 +96,151 @@ public class WarcRecorder implements AutoCloseable {
Instant date = Instant.now();
// Not entirely sure why we need to do this, but keeping it due to Chesterton's Fence
Map<String, List<String>> extraHeaders = new HashMap<>(request.headers().map());
Map<String, List<String>> extraHeaders = new HashMap<>(request.getHeaders().length);
HttpResponse<InputStream> response;
// Inject a range header to attempt to limit the size of the response
// to the maximum size we want to store, if the server supports it.
request.addHeader("Range", "bytes=0-"+MAX_SIZE);
cookies.paintRequest(request);
try {
response = client.send(request, java.net.http.HttpResponse.BodyHandlers.ofInputStream());
}
catch (Exception ex) {
logger.warn("Failed to fetch URL {}: {}", requestUri, ex.getMessage());
return client.execute(request,response -> {
try (WarcInputBuffer inputBuffer = WarcInputBuffer.forResponse(response, request, timeout);
InputStream inputStream = inputBuffer.read()) {
cookies.updateCookieStore(response);
// Build and write the request
WarcDigestBuilder requestDigestBuilder = new WarcDigestBuilder();
byte[] httpRequestString = WarcProtocolReconstructor
.getHttpRequestString(
request.getMethod(),
request.getHeaders(),
extraHeaders,
requestUri)
.getBytes();
requestDigestBuilder.update(httpRequestString);
WarcRequest warcRequest = new WarcRequest.Builder(requestUri)
.blockDigest(requestDigestBuilder.build())
.date(date)
.body(MediaType.HTTP_REQUEST, httpRequestString)
.build();
warcRequest.http(); // force HTTP header to be parsed before body is consumed so that caller can use it
writer.write(warcRequest);
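// Tag the stored response with a synthetic X-Has-Cookies pseudo-header so that
// later conversion of the WARC data into crawl records can tell cookies were in play.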
if (cookies.hasCookies()) {
response.addHeader("X-Has-Cookies", 1);
}
byte[] responseHeaders = WarcProtocolReconstructor.getResponseHeader(response, inputBuffer.size()).getBytes(StandardCharsets.UTF_8);
ResponseDataBuffer responseDataBuffer = new ResponseDataBuffer(inputBuffer.size() + responseHeaders.length);
responseDataBuffer.put(responseHeaders);
responseDataBuffer.updateDigest(responseDigestBuilder, 0, responseHeaders.length);
int dataStart = responseDataBuffer.pos();
for (;;) {
int remainingLength = responseDataBuffer.remaining();
if (remainingLength == 0)
break;
int startPos = responseDataBuffer.pos();
int n = responseDataBuffer.readFrom(inputStream, remainingLength);
if (n < 0)
break;
responseDataBuffer.updateDigest(responseDigestBuilder, startPos, n);
responseDataBuffer.updateDigest(payloadDigestBuilder, startPos, n);
}
// with some http client libraries, that resolve redirects transparently, this might be different
// from the request URI, but currently we don't have transparent redirect resolution so it's always
// the same (though let's keep the variables separate in case this changes)
final URI responseUri = requestUri;
WarcResponse.Builder responseBuilder = new WarcResponse.Builder(responseUri)
.blockDigest(responseDigestBuilder.build())
.date(date)
.concurrentTo(warcRequest.id())
.body(MediaType.HTTP_RESPONSE, responseDataBuffer.copyBytes());
InetAddress inetAddress = InetAddress.getByName(responseUri.getHost());
responseBuilder.ipAddress(inetAddress);
responseBuilder.payloadDigest(payloadDigestBuilder.build());
responseBuilder.truncated(inputBuffer.truncationReason());
// Build and write the response
var warcResponse = responseBuilder.build();
warcResponse.http(); // force HTTP header to be parsed before body is consumed so that caller can use it
writer.write(warcResponse);
if (Duration.between(date, Instant.now()).compareTo(Duration.ofSeconds(9)) > 0
&& inputBuffer.size() < 2048
&& !requestUri.getPath().endsWith("robots.txt")) // don't bail on robots.txt
{
// Fast detection and mitigation of crawler traps that respond with slow
// small responses, with a high branching factor
// Note we bail *after* writing the warc records; this will effectively only
// prevent link extraction from the document.
logger.warn("URL {} took too long to fetch ({}s) and was too small for the effort ({}b)",
requestUri,
Duration.between(date, Instant.now()).getSeconds(),
inputBuffer.size()
);
return new HttpFetchResult.ResultException(new IOException("Likely crawler trap"));
}
if (response.getCode() == 301 || response.getCode() == 302 || response.getCode() == 307) {
// If the server responds with a redirect, we need to
// update the request URI to the new location
EdgeUrl redirectLocation = Optional.ofNullable(response.getFirstHeader("Location"))
.map(NameValuePair::getValue)
.flatMap(location -> linkParser.parseLink(new EdgeUrl(requestUri), location))
.orElse(null);
if (redirectLocation != null) {
// If the redirect location is a valid URL, we need to update the request URI
return new HttpFetchResult.ResultRedirect(redirectLocation);
} else {
// If the redirect location is not a valid URL, we need to throw an exception
return new HttpFetchResult.ResultException(new IOException("Invalid redirect location: " + response.getFirstHeader("Location")));
}
}
return new HttpFetchResult.ResultOk(responseUri,
response.getCode(),
inputBuffer.headers(),
inetAddress.getHostAddress(),
responseDataBuffer.data,
dataStart,
responseDataBuffer.length() - dataStart);
} catch (Exception ex) {
flagAsError(new EdgeUrl(requestUri), ex); // write a WARC record to indicate the error
logger.warn("Failed to fetch URL {}: {}", requestUri, ex.getMessage());
return new HttpFetchResult.ResultException(ex);
}
});
// the client.execute() method will throw an exception if the request times out
// or on other IO exceptions, so we need to catch those here as well as having
// exception handling in the response handler
} catch (SocketTimeoutException ex) {
flagAsTimeout(new EdgeUrl(requestUri)); // write a WARC record to indicate the timeout
return new HttpFetchResult.ResultException(ex);
}
try (WarcInputBuffer inputBuffer = WarcInputBuffer.forResponse(response);
InputStream inputStream = inputBuffer.read())
{
if (cookies.hasCookies()) {
extraHeaders.put("X-Has-Cookies", List.of("1"));
}
byte[] responseHeaders = WarcProtocolReconstructor.getResponseHeader(response, inputBuffer.size()).getBytes(StandardCharsets.UTF_8);
ResponseDataBuffer responseDataBuffer = new ResponseDataBuffer(inputBuffer.size() + responseHeaders.length);
responseDataBuffer.put(responseHeaders);
responseDataBuffer.updateDigest(responseDigestBuilder, 0, responseHeaders.length);
int dataStart = responseDataBuffer.pos();
for (;;) {
int remainingLength = responseDataBuffer.remaining();
if (remainingLength == 0)
break;
int startPos = responseDataBuffer.pos();
int n = responseDataBuffer.readFrom(inputStream, remainingLength);
if (n < 0)
break;
responseDataBuffer.updateDigest(responseDigestBuilder, startPos, n);
responseDataBuffer.updateDigest(payloadDigestBuilder, startPos, n);
}
// It looks like this might be the same as requestUri, but it's not;
// it's the URI after resolving redirects.
final URI responseUri = response.uri();
WarcResponse.Builder responseBuilder = new WarcResponse.Builder(responseUri)
.blockDigest(responseDigestBuilder.build())
.date(date)
.body(MediaType.HTTP_RESPONSE, responseDataBuffer.copyBytes());
InetAddress inetAddress = InetAddress.getByName(responseUri.getHost());
responseBuilder.ipAddress(inetAddress);
responseBuilder.payloadDigest(payloadDigestBuilder.build());
responseBuilder.truncated(inputBuffer.truncationReason());
// Build and write the response
var warcResponse = responseBuilder.build();
warcResponse.http(); // force HTTP header to be parsed before body is consumed so that caller can use it
writer.write(warcResponse);
// Build and write the request
WarcDigestBuilder requestDigestBuilder = new WarcDigestBuilder();
byte[] httpRequestString = WarcProtocolReconstructor
.getHttpRequestString(
response.request().method(),
response.request().headers().map(),
extraHeaders,
requestUri)
.getBytes();
requestDigestBuilder.update(httpRequestString);
WarcRequest warcRequest = new WarcRequest.Builder(requestUri)
.blockDigest(requestDigestBuilder.build())
.date(date)
.body(MediaType.HTTP_REQUEST, httpRequestString)
.concurrentTo(warcResponse.id())
.build();
warcRequest.http(); // force HTTP header to be parsed before body is consumed so that caller can use it
writer.write(warcRequest);
if (Duration.between(date, Instant.now()).compareTo(Duration.ofSeconds(9)) > 0
&& inputBuffer.size() < 2048
&& !request.uri().getPath().endsWith("robots.txt")) // don't bail on robots.txt
{
// Fast detection and mitigation of crawler traps that respond with slow
// small responses, with a high branching factor
// Note we bail *after* writing the warc records; this will effectively only
// prevent link extraction from the document.
logger.warn("URL {} took too long to fetch ({}s) and was too small for the effort ({}b)",
requestUri,
Duration.between(date, Instant.now()).getSeconds(),
inputBuffer.size()
);
return new HttpFetchResult.ResultException(new IOException("Likely crawler trap"));
}
return new HttpFetchResult.ResultOk(responseUri,
response.statusCode(),
inputBuffer.headers(),
inetAddress.getHostAddress(),
responseDataBuffer.data,
dataStart,
responseDataBuffer.length() - dataStart);
}
catch (Exception ex) {
} catch (IOException ex) {
flagAsError(new EdgeUrl(requestUri), ex); // write a WARC record to indicate the error
logger.warn("Failed to fetch URL {}: {}", requestUri, ex.getMessage());
return new HttpFetchResult.ResultException(ex);
}
@@ -214,7 +250,7 @@ public class WarcRecorder implements AutoCloseable {
writer.write(item);
}
private void saveOldResponse(EdgeUrl url, String contentType, int statusCode, byte[] documentBody, @Nullable String headers, ContentTags contentTags) {
private void saveOldResponse(EdgeUrl url, DomainCookies domainCookies, String contentType, int statusCode, byte[] documentBody, @Nullable String headers, ContentTags contentTags) {
try {
WarcDigestBuilder responseDigestBuilder = new WarcDigestBuilder();
WarcDigestBuilder payloadDigestBuilder = new WarcDigestBuilder();
@@ -275,7 +311,7 @@ public class WarcRecorder implements AutoCloseable {
.date(Instant.now())
.body(MediaType.HTTP_RESPONSE, responseDataBuffer.copyBytes());
if (cookies.hasCookies()) {
if (domainCookies.hasCookies() || (headers != null && headers.contains("Set-Cookie:"))) {
builder.addHeader("X-Has-Cookies", "1");
}
@@ -295,8 +331,8 @@ public class WarcRecorder implements AutoCloseable {
* an E-Tag or Last-Modified header, and the server responds with a 304 Not Modified. In this
* scenario we want to record the data as it was in the previous crawl, but not re-fetch it.
*/
public void writeReferenceCopy(EdgeUrl url, String contentType, int statusCode, byte[] documentBody, @Nullable String headers, ContentTags ctags) {
saveOldResponse(url, contentType, statusCode, documentBody, headers, ctags);
public void writeReferenceCopy(EdgeUrl url, DomainCookies cookies, String contentType, int statusCode, byte[] documentBody, @Nullable String headers, ContentTags ctags) {
saveOldResponse(url, cookies, contentType, statusCode, documentBody, headers, ctags);
}
public void writeWarcinfoHeader(String ip, EdgeDomain domain, HttpFetcherImpl.DomainProbeResult result) throws IOException {
@@ -316,6 +352,9 @@ public class WarcRecorder implements AutoCloseable {
case HttpFetcherImpl.DomainProbeResult.Ok ok:
fields.put("X-WARC-Probe-Status", List.of("OK"));
break;
case HttpFetcher.DomainProbeResult.RedirectSameDomain_Internal redirectSameDomain:
fields.put("X-WARC-Probe-Status", List.of("REDIR-INTERNAL"));
break;
}
var warcinfo = new Warcinfo.Builder()

View File

@@ -3,6 +3,7 @@ package nu.marginalia.crawl.logic;
import nu.marginalia.model.EdgeDomain;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;
@@ -19,8 +20,21 @@ public class DomainLocks {
* and may be held by another thread. The caller is responsible for locking and releasing the lock.
*/
public DomainLock lockDomain(EdgeDomain domain) throws InterruptedException {
return new DomainLock(domain.toString(),
var ret = new DomainLock(domain.toString(),
locks.computeIfAbsent(domain.topDomain.toLowerCase(), this::defaultPermits));
ret.lock();
return ret;
}
public Optional<DomainLock> tryLockDomain(EdgeDomain domain) {
var sem = locks.computeIfAbsent(domain.topDomain.toLowerCase(), this::defaultPermits);
if (sem.tryAcquire(1)) {
return Optional.of(new DomainLock(domain.toString(), sem));
}
else {
// We don't have a lock, so we return an empty optional
return Optional.empty();
}
}
private Semaphore defaultPermits(String topDomain) {
@@ -44,14 +58,25 @@ public class DomainLocks {
return new Semaphore(2);
}
public boolean canLock(EdgeDomain domain) {
Semaphore sem = locks.get(domain.topDomain.toLowerCase());
if (null == sem)
return true;
else
return sem.availablePermits() > 0;
}
public static class DomainLock implements AutoCloseable {
private final String domainName;
private final Semaphore semaphore;
DomainLock(String domainName, Semaphore semaphore) throws InterruptedException {
DomainLock(String domainName, Semaphore semaphore) {
this.domainName = domainName;
this.semaphore = semaphore;
}
// This method is called to lock the domain. It will block until the lock is available.
private void lock() throws InterruptedException {
Thread.currentThread().setName("crawling:" + domainName + " [await domain lock]");
semaphore.acquire();
Thread.currentThread().setName("crawling:" + domainName);

View File

@@ -3,6 +3,7 @@ package nu.marginalia.crawl.retreival;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;
import static java.lang.Math.max;
import static java.lang.Math.min;
@@ -50,15 +51,20 @@ public class CrawlDelayTimer {
waitFetchDelay(0);
}
public void waitFetchDelay(Duration spentTime) {
waitFetchDelay(spentTime.toMillis());
}
public void waitFetchDelay(long spentTime) {
long sleepTime = delayTime;
long jitter = ThreadLocalRandom.current().nextLong(0, 150);
try {
if (sleepTime >= 1) {
if (spentTime > sleepTime)
return;
Thread.sleep(min(sleepTime - spentTime, 5000));
Thread.sleep(min(sleepTime - spentTime, 5000) + jitter);
} else {
// When no crawl delay is specified, lean toward twice the fetch+process time,
// within sane limits. This means slower servers get slower crawling, and faster
@@ -71,17 +77,17 @@ public class CrawlDelayTimer {
if (spentTime > sleepTime)
return;
Thread.sleep(sleepTime - spentTime);
Thread.sleep(sleepTime - spentTime + jitter);
}
if (slowDown) {
// Additional delay when the server is signalling it wants slower requests
Thread.sleep(DEFAULT_CRAWL_DELAY_MIN_MS);
Thread.sleep(DEFAULT_CRAWL_DELAY_MIN_MS + jitter);
}
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new RuntimeException();
throw new RuntimeException("Interrupted", e);
}
}
}

View File

@@ -0,0 +1,42 @@
package nu.marginalia.crawl.retreival;
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
/**
* This class is used to stagger the rate at which connections are created.
* <p></p>
* It is used to ensure that we do not create too many connections at once,
* which can lead to network congestion and other issues. Since the connections
* tend to be very long-lived, we can afford to wait a bit before creating the next one,
* even if it adds a bit of build-up time when the crawl starts.
*/
public class CrawlerConnectionThrottle {
private Instant lastCrawlStart = Instant.EPOCH;
private final Semaphore launchSemaphore = new Semaphore(1);
private final Duration launchInterval;
public CrawlerConnectionThrottle(Duration launchInterval) {
this.launchInterval = launchInterval;
}
public void waitForConnectionPermission() throws InterruptedException {
try {
launchSemaphore.acquire();
Instant nextPermittedLaunch = lastCrawlStart.plus(launchInterval);
if (nextPermittedLaunch.isAfter(Instant.now())) {
long waitTime = Duration.between(Instant.now(), nextPermittedLaunch).toMillis();
TimeUnit.MILLISECONDS.sleep(waitTime);
}
lastCrawlStart = Instant.now();
}
finally {
launchSemaphore.release();
}
}
}
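
For illustration, a small sketch of the pacing behavior (the one-second interval matches what CrawlerRetreiver configures further down; the demo class itself is hypothetical):

import java.time.Duration;

class ThrottleSketch {
    public static void main(String[] args) throws InterruptedException {
        var throttle = new CrawlerConnectionThrottle(Duration.ofSeconds(1));
        throttle.waitForConnectionPermission(); // first caller proceeds immediately
        throttle.waitForConnectionPermission(); // second caller is held back roughly one second
    }
}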

View File

@@ -6,8 +6,8 @@ import nu.marginalia.contenttype.ContentType;
import nu.marginalia.crawl.CrawlerMain;
import nu.marginalia.crawl.DomainStateDb;
import nu.marginalia.crawl.fetcher.ContentTags;
import nu.marginalia.crawl.fetcher.DomainCookies;
import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.logic.LinkFilterSelector;
import nu.marginalia.crawl.retreival.revisit.CrawlerRevisitor;
@@ -26,14 +26,16 @@ import java.io.IOException;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.nio.file.Path;
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Objects;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
public class CrawlerRetreiver implements AutoCloseable {
private static final int MAX_ERRORS = 20;
private static final int HTTP_429_RETRY_LIMIT = 1; // Retry 429s once
private final HttpFetcher fetcher;
@@ -50,6 +52,11 @@ public class CrawlerRetreiver implements AutoCloseable {
private final DomainStateDb domainStateDb;
private final WarcRecorder warcRecorder;
private final CrawlerRevisitor crawlerRevisitor;
private final DomainCookies cookies = new DomainCookies();
private static final CrawlerConnectionThrottle connectionThrottle = new CrawlerConnectionThrottle(
Duration.ofSeconds(1) // pace the connections to avoid network congestion at startup
);
int errorCount = 0;
@@ -90,6 +97,11 @@ public class CrawlerRetreiver implements AutoCloseable {
public int crawlDomain(DomainLinks domainLinks, CrawlDataReference oldCrawlData) {
try (oldCrawlData) {
// Wait for permission to open a connection to avoid network congestion
// from hundreds/thousands of TCP handshakes
connectionThrottle.waitForConnectionPermission();
// Do an initial domain probe to determine the root URL
var probeResult = probeRootUrl();
@@ -108,15 +120,24 @@ public class CrawlerRetreiver implements AutoCloseable {
DomainStateDb.SummaryRecord summaryRecord = sniffRootDocument(probedUrl, delayTimer);
domainStateDb.save(summaryRecord);
if (Thread.interrupted()) {
// There's a small chance we're interrupted during the sniffing portion
throw new InterruptedException();
}
Instant recrawlStart = Instant.now();
CrawlerRevisitor.RecrawlMetadata recrawlMetadata = crawlerRevisitor.recrawl(oldCrawlData, cookies, robotsRules, delayTimer);
Duration recrawlTime = Duration.between(recrawlStart, Instant.now());
// Play back the old crawl data (if present) and fetch the documents comparing etags and last-modified
if (crawlerRevisitor.recrawl(oldCrawlData, robotsRules, delayTimer) > 0) {
if (recrawlMetadata.size() > 0) {
// If we have reference data, we will always grow the crawl depth a bit
crawlFrontier.increaseDepth(1.5, 2500);
}
oldCrawlData.close(); // proactively close the crawl data reference here to not hold onto expensive resources
yield crawlDomain(probedUrl, robotsRules, delayTimer, domainLinks);
yield crawlDomain(probedUrl, robotsRules, delayTimer, domainLinks, recrawlMetadata, recrawlTime);
}
case HttpFetcher.DomainProbeResult.Redirect(EdgeDomain domain1) -> {
domainStateDb.save(DomainStateDb.SummaryRecord.forError(domain, "Redirect", domain1.toString()));
@@ -126,6 +147,10 @@ public class CrawlerRetreiver implements AutoCloseable {
domainStateDb.save(DomainStateDb.SummaryRecord.forError(domain, status.toString(), desc));
yield 1;
}
default -> {
logger.error("Unexpected domain probe result {}", probeResult);
yield 1;
}
};
}
@@ -138,17 +163,29 @@ public class CrawlerRetreiver implements AutoCloseable {
private int crawlDomain(EdgeUrl rootUrl,
SimpleRobotRules robotsRules,
CrawlDelayTimer delayTimer,
DomainLinks domainLinks) {
DomainLinks domainLinks,
CrawlerRevisitor.RecrawlMetadata recrawlMetadata,
Duration recrawlTime) {
Instant crawlStart = Instant.now();
// Add external links to the crawl frontier
crawlFrontier.addAllToQueue(domainLinks.getUrls(rootUrl.proto));
// Fetch sitemaps
for (var sitemap : robotsRules.getSitemaps()) {
crawlFrontier.addAllToQueue(fetcher.fetchSitemapUrls(sitemap, delayTimer));
// Validate the sitemap URL and check that it belongs to the same domain as the root URL
if (EdgeUrl.parse(sitemap)
.map(url -> url.getDomain().equals(rootUrl.domain))
.orElse(false)) {
crawlFrontier.addAllToQueue(fetcher.fetchSitemapUrls(sitemap, delayTimer));
}
}
int crawlerAdditions = 0;
while (!crawlFrontier.isEmpty()
&& !crawlFrontier.isCrawlDepthReached()
&& errorCount < MAX_ERRORS
@@ -180,7 +217,11 @@ public class CrawlerRetreiver implements AutoCloseable {
continue;
try {
fetchContentWithReference(top, delayTimer, DocumentWithReference.empty());
var result = fetchContentWithReference(top, delayTimer, DocumentWithReference.empty());
if (result.isOk()) {
crawlerAdditions++;
}
}
catch (InterruptedException ex) {
Thread.currentThread().interrupt();
@@ -188,6 +229,17 @@ public class CrawlerRetreiver implements AutoCloseable {
}
}
Duration crawlTime = Duration.between(crawlStart, Instant.now());
domainStateDb.save(new DomainStateDb.CrawlMeta(
domain,
Instant.now(),
recrawlTime,
crawlTime,
recrawlMetadata.errors(),
crawlerAdditions,
recrawlMetadata.size() + crawlerAdditions
));
return crawlFrontier.visitedSize();
}
@@ -216,17 +268,29 @@ public class CrawlerRetreiver implements AutoCloseable {
return domainProbeResult;
}
private DomainStateDb.SummaryRecord sniffRootDocument(EdgeUrl rootUrl, CrawlDelayTimer timer) {
Optional<String> feedLink = Optional.empty();
try {
var url = rootUrl.withPathAndParam("/", null);
HttpFetchResult result = fetchWithRetry(url, timer, HttpFetcher.ProbeType.DISABLED, ContentTags.empty());
HttpFetchResult result = fetcher.fetchContent(url, warcRecorder, cookies, timer, ContentTags.empty(), HttpFetcher.ProbeType.DISABLED);
timer.waitFetchDelay(0);
if (!(result instanceof HttpFetchResult.ResultOk ok))
if (result instanceof HttpFetchResult.ResultRedirect(EdgeUrl location)) {
if (Objects.equals(location.domain, url.domain)) {
// TODO: Follow the redirect to the new location and sniff the document
crawlFrontier.addFirst(location);
}
return DomainStateDb.SummaryRecord.forSuccess(domain);
}
if (!(result instanceof HttpFetchResult.ResultOk ok)) {
return DomainStateDb.SummaryRecord.forSuccess(domain);
}
var optDoc = ok.parseDocument();
if (optDoc.isEmpty())
@@ -275,7 +339,7 @@ public class CrawlerRetreiver implements AutoCloseable {
// Grab the favicon if it exists
if (fetchWithRetry(faviconUrl, timer, HttpFetcher.ProbeType.DISABLED, ContentTags.empty()) instanceof HttpFetchResult.ResultOk iconResult) {
if (fetcher.fetchContent(faviconUrl, warcRecorder, cookies, timer, ContentTags.empty(), HttpFetcher.ProbeType.DISABLED) instanceof HttpFetchResult.ResultOk iconResult) {
String contentType = iconResult.header("Content-Type");
byte[] iconData = iconResult.getBodyBytes();
@@ -289,6 +353,10 @@ public class CrawlerRetreiver implements AutoCloseable {
}
catch (Exception ex) {
logger.error("Error configuring link filter", ex);
if (Thread.interrupted()) {
Thread.currentThread().interrupt();
return DomainStateDb.SummaryRecord.forError(domain, "Crawler Interrupted", ex.getMessage());
}
}
finally {
crawlFrontier.addVisited(rootUrl);
@@ -316,7 +384,7 @@ public class CrawlerRetreiver implements AutoCloseable {
);
private Optional<String> guessFeedUrl(CrawlDelayTimer timer) throws InterruptedException {
var oldDomainStateRecord = domainStateDb.get(domain);
var oldDomainStateRecord = domainStateDb.getSummary(domain);
// If we are already aware of an old feed URL, then we can just revalidate it
if (oldDomainStateRecord.isPresent()) {
@@ -341,7 +409,7 @@ public class CrawlerRetreiver implements AutoCloseable {
if (parsedOpt.isEmpty())
return false;
HttpFetchResult result = fetchWithRetry(parsedOpt.get(), timer, HttpFetcher.ProbeType.DISABLED, ContentTags.empty());
HttpFetchResult result = fetcher.fetchContent(parsedOpt.get(), warcRecorder, cookies, timer, ContentTags.empty(), HttpFetcher.ProbeType.DISABLED);
timer.waitFetchDelay(0);
if (!(result instanceof HttpFetchResult.ResultOk ok)) {
@@ -367,110 +435,63 @@ public class CrawlerRetreiver implements AutoCloseable {
CrawlDelayTimer timer,
DocumentWithReference reference) throws InterruptedException
{
logger.debug("Fetching {}", top);
long startTime = System.currentTimeMillis();
var contentTags = reference.getContentTags();
HttpFetchResult fetchedDoc = fetchWithRetry(top, timer, HttpFetcher.ProbeType.FULL, contentTags);
HttpFetchResult fetchedDoc = fetcher.fetchContent(top, warcRecorder, cookies, timer, contentTags, HttpFetcher.ProbeType.FULL);
timer.waitFetchDelay();
if (Thread.interrupted()) {
Thread.currentThread().interrupt();
throw new InterruptedException();
}
// Parse the document and enqueue links
try {
if (fetchedDoc instanceof HttpFetchResult.ResultOk ok) {
var docOpt = ok.parseDocument();
if (docOpt.isPresent()) {
var doc = docOpt.get();
switch (fetchedDoc) {
case HttpFetchResult.ResultOk ok -> {
var docOpt = ok.parseDocument();
if (docOpt.isPresent()) {
var doc = docOpt.get();
crawlFrontier.enqueueLinksFromDocument(top, doc);
crawlFrontier.addVisited(new EdgeUrl(ok.uri()));
var responseUrl = new EdgeUrl(ok.uri());
crawlFrontier.enqueueLinksFromDocument(responseUrl, doc);
crawlFrontier.addVisited(responseUrl);
}
}
}
else if (fetchedDoc instanceof HttpFetchResult.Result304Raw && reference.doc() != null) {
var doc = reference.doc();
case HttpFetchResult.Result304Raw ref when reference.doc() != null ->
{
var doc = reference.doc();
warcRecorder.writeReferenceCopy(top, doc.contentType, doc.httpStatus, doc.documentBodyBytes, doc.headers, contentTags);
warcRecorder.writeReferenceCopy(top, cookies, doc.contentType, doc.httpStatus, doc.documentBodyBytes, doc.headers, contentTags);
fetchedDoc = new HttpFetchResult.Result304ReplacedWithReference(doc.url,
new ContentType(doc.contentType, "UTF-8"),
doc.documentBodyBytes);
fetchedDoc = new HttpFetchResult.Result304ReplacedWithReference(doc.url,
new ContentType(doc.contentType, "UTF-8"),
doc.documentBodyBytes);
if (doc.documentBodyBytes != null) {
var parsed = doc.parseBody();
if (doc.documentBodyBytes != null) {
var parsed = doc.parseBody();
crawlFrontier.enqueueLinksFromDocument(top, parsed);
crawlFrontier.addVisited(top);
crawlFrontier.enqueueLinksFromDocument(top, parsed);
crawlFrontier.addVisited(top);
}
}
}
else if (fetchedDoc instanceof HttpFetchResult.ResultException) {
errorCount ++;
case HttpFetchResult.ResultRedirect(EdgeUrl location) -> {
if (Objects.equals(location.domain, top.domain)) {
crawlFrontier.addFirst(location);
}
}
case HttpFetchResult.ResultException ex -> errorCount++;
default -> {} // Ignore other types
}
}
catch (Exception ex) {
logger.error("Error parsing document {}", top, ex);
}
timer.waitFetchDelay(System.currentTimeMillis() - startTime);
return fetchedDoc;
}
/** Fetch a document and retry on 429s */
private HttpFetchResult fetchWithRetry(EdgeUrl url,
CrawlDelayTimer timer,
HttpFetcher.ProbeType probeType,
ContentTags contentTags) throws InterruptedException {
long probeStart = System.currentTimeMillis();
if (probeType == HttpFetcher.ProbeType.FULL) {
retryLoop:
for (int i = 0; i <= HTTP_429_RETRY_LIMIT; i++) {
try {
var probeResult = fetcher.probeContentType(url, warcRecorder, contentTags);
switch (probeResult) {
case HttpFetcher.ContentTypeProbeResult.Ok(EdgeUrl resolvedUrl):
url = resolvedUrl; // If we were redirected while probing, use the final URL for fetching
break retryLoop;
case HttpFetcher.ContentTypeProbeResult.BadContentType badContentType:
return new HttpFetchResult.ResultNone();
case HttpFetcher.ContentTypeProbeResult.BadContentType.Timeout timeout:
return new HttpFetchResult.ResultException(timeout.ex());
case HttpFetcher.ContentTypeProbeResult.Exception exception:
return new HttpFetchResult.ResultException(exception.ex());
default: // should be unreachable
throw new IllegalStateException("Unknown probe result");
}
}
catch (HttpFetcherImpl.RateLimitException ex) {
timer.waitRetryDelay(ex);
}
catch (Exception ex) {
logger.warn("Failed to fetch {}", url, ex);
return new HttpFetchResult.ResultException(ex);
}
}
timer.waitFetchDelay(System.currentTimeMillis() - probeStart);
}
for (int i = 0; i <= HTTP_429_RETRY_LIMIT; i++) {
try {
return fetcher.fetchContent(url, warcRecorder, contentTags, probeType);
}
catch (HttpFetcherImpl.RateLimitException ex) {
timer.waitRetryDelay(ex);
}
catch (Exception ex) {
logger.warn("Failed to fetch {}", url, ex);
return new HttpFetchResult.ResultException(ex);
}
}
return new HttpFetchResult.ResultNone();
}
private boolean isAllowedProtocol(String proto) {
return proto.equalsIgnoreCase("http")
|| proto.equalsIgnoreCase("https");

View File

@@ -55,6 +55,9 @@ public class DomainCrawlFrontier {
}
}
public EdgeDomain getDomain() {
return thisDomain;
}
/** Increase the depth of the crawl by a factor. If the current depth is smaller
* than the number of already visited documents, the base depth will be adjusted
* to the visited count first.

View File

@@ -2,6 +2,7 @@ package nu.marginalia.crawl.retreival.revisit;
import crawlercommons.robots.SimpleRobotRules;
import nu.marginalia.crawl.fetcher.ContentTags;
import nu.marginalia.crawl.fetcher.DomainCookies;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.retreival.CrawlDataReference;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
@@ -10,6 +11,8 @@ import nu.marginalia.crawl.retreival.DomainCrawlFrontier;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.model.crawldata.CrawledDocument;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
@@ -18,10 +21,13 @@ import java.io.IOException;
* E-Tag and Last-Modified headers.
*/
public class CrawlerRevisitor {
private final DomainCrawlFrontier crawlFrontier;
private final CrawlerRetreiver crawlerRetreiver;
private final WarcRecorder warcRecorder;
private static final Logger logger = LoggerFactory.getLogger(CrawlerRevisitor.class);
public CrawlerRevisitor(DomainCrawlFrontier crawlFrontier,
CrawlerRetreiver crawlerRetreiver,
WarcRecorder warcRecorder) {
@@ -31,7 +37,8 @@ public class CrawlerRevisitor {
}
/** Performs a re-crawl of old documents, comparing etags and last-modified */
public int recrawl(CrawlDataReference oldCrawlData,
public RecrawlMetadata recrawl(CrawlDataReference oldCrawlData,
DomainCookies cookies,
SimpleRobotRules robotsRules,
CrawlDelayTimer delayTimer)
throws InterruptedException {
@@ -39,6 +46,7 @@ public class CrawlerRevisitor {
int retained = 0;
int errors = 0;
int skipped = 0;
int size = 0;
for (CrawledDocument doc : oldCrawlData) {
if (errors > 20) {
@@ -46,6 +54,10 @@ public class CrawlerRevisitor {
break;
}
if (Thread.interrupted()) {
throw new InterruptedException();
}
var urlMaybe = EdgeUrl.parse(doc.url);
if (urlMaybe.isEmpty())
continue;
@@ -78,6 +90,7 @@ public class CrawlerRevisitor {
continue;
}
size++;
double skipProb;
@@ -121,6 +134,7 @@ public class CrawlerRevisitor {
}
// Add a WARC record so we don't repeat this
warcRecorder.writeReferenceCopy(url,
cookies,
doc.contentType,
doc.httpStatus,
doc.documentBodyBytes,
@@ -145,11 +159,15 @@ public class CrawlerRevisitor {
else if (result instanceof HttpFetchResult.ResultException) {
errors++;
}
recrawled++;
}
}
return recrawled;
logger.info("Recrawl summary {}: {} recrawled, {} retained, {} errors, {} skipped",
crawlFrontier.getDomain(), recrawled, retained, errors, skipped);
return new RecrawlMetadata(size, errors, skipped);
}
public record RecrawlMetadata(int size, int errors, int skipped) {}
}

View File

@@ -6,6 +6,7 @@ import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.model.crawldata.CrawledDocument;
import javax.annotation.Nullable;
import java.util.Objects;
public record DocumentWithReference(
@Nullable CrawledDocument doc,
@@ -33,8 +34,22 @@ public record DocumentWithReference(
return false;
if (doc == null)
return false;
if (doc.documentBodyBytes.length == 0)
return false;
if (doc.documentBodyBytes.length == 0) {
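// The stored reference has no body; fall back to comparing status codes,
// and for redirects, the Location targets.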
if (doc.httpStatus < 300) {
return resultOk.bytesLength() == 0;
}
else if (doc.httpStatus == 301 || doc.httpStatus == 302 || doc.httpStatus == 307) {
@Nullable
String docLocation = doc.getHeader("Location");
@Nullable
String resultLocation = resultOk.header("Location");
return Objects.equals(docLocation, resultLocation);
}
else {
return doc.httpStatus == resultOk.statusCode();
}
}
return CrawlDataReference.isContentBodySame(doc.documentBodyBytes, resultOk.bytesRaw());
}

View File

@@ -41,6 +41,8 @@ dependencies {
implementation libs.snakeyaml
implementation libs.zstd
implementation libs.bundles.httpcomponents
testImplementation libs.bundles.slf4j.test
testImplementation libs.bundles.junit
testImplementation libs.mockito

View File

@@ -42,18 +42,20 @@ public interface SerializableCrawlDataStream extends AutoCloseable {
{
String fileName = fullPath.getFileName().toString();
if (fileName.endsWith(".parquet")) {
if (fileName.endsWith(".slop.zip")) {
try {
return new ParquetSerializableCrawlDataStream(fullPath);
return new SlopSerializableCrawlDataStream(fullPath);
} catch (Exception ex) {
logger.error("Error reading domain data from " + fullPath, ex);
return SerializableCrawlDataStream.empty();
}
}
if (fileName.endsWith(".slop.zip")) {
else if (fileName.endsWith(".parquet")) {
logger.error("Opening deprecated parquet-style crawl data stream", new Exception());
try {
return new SlopSerializableCrawlDataStream(fullPath);
return new ParquetSerializableCrawlDataStream(fullPath);
} catch (Exception ex) {
logger.error("Error reading domain data from " + fullPath, ex);
return SerializableCrawlDataStream.empty();

View File

@@ -1,6 +1,9 @@
package nu.marginalia.model.body;
import nu.marginalia.contenttype.ContentType;
import nu.marginalia.model.EdgeUrl;
import org.apache.hc.core5.http.Header;
import org.apache.hc.core5.http.message.BasicHeader;
import org.jetbrains.annotations.Nullable;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
@@ -11,8 +14,9 @@ import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.net.InetAddress;
import java.net.URI;
import java.net.http.HttpHeaders;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
/* FIXME: This interface has a very unfortunate name that is not very descriptive.
@@ -57,7 +61,7 @@ public sealed interface HttpFetchResult {
*/
record ResultOk(URI uri,
int statusCode,
HttpHeaders headers,
Header[] headers,
String ipAddress,
byte[] bytesRaw, // raw data for the entire response including headers
int bytesStart,
@@ -65,7 +69,22 @@ public sealed interface HttpFetchResult {
) implements HttpFetchResult {
public ResultOk(URI uri, int status, MessageHeaders headers, String ipAddress, byte[] bytes, int bytesStart, int length) {
this(uri, status, HttpHeaders.of(headers.map(), (k,v) -> true), ipAddress, bytes, bytesStart, length);
this(uri, status, convertHeaders(headers), ipAddress, bytes, bytesStart, length);
}
private static Header[] convertHeaders(MessageHeaders messageHeaders) {
List<Header> headers = new ArrayList<>(12);
messageHeaders.map().forEach((k, v) -> {
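// Skip blank header names and pseudo-headers (names that don't start with a
// letter, e.g. HTTP/2 ":status"), which shouldn't be emitted as real headers.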
if (k.isBlank()) return;
if (!Character.isAlphabetic(k.charAt(0))) return;
for (var value : v) {
headers.add(new BasicHeader(k, value));
}
});
return headers.toArray(new Header[0]);
}
public boolean isOk() {
@@ -95,7 +114,13 @@ public sealed interface HttpFetchResult {
@Nullable
public String header(String name) {
return headers.firstValue(name).orElse(null);
for (var header : headers) {
if (header.getName().equalsIgnoreCase(name)) {
String headerValue = header.getValue();
return headerValue;
}
}
return null;
}
}
@@ -119,6 +144,12 @@ public sealed interface HttpFetchResult {
}
}
record ResultRedirect(EdgeUrl url) implements HttpFetchResult {
public boolean isOk() {
return true;
}
}
/** Fetching resulted in a HTTP 304, the remote content is identical to
* our reference copy. This will be replaced with a Result304ReplacedWithReference
* at a later stage.

View File

@@ -102,7 +102,7 @@ public final class CrawledDocument implements SerializableCrawlData {
}
@Nullable
private String getHeader(String header) {
public String getHeader(String header) {
if (headers == null) {
return null;
}

View File

@@ -165,12 +165,26 @@ public class CrawledDocumentParquetRecordFileWriter implements AutoCloseable {
contentType = "";
}
boolean hasCookies = false;
String etag = null;
String lastModified = null;
StringJoiner headersStrBuilder = new StringJoiner("\n");
for (var header : headers.map().entrySet()) {
for (var value : header.getValue()) {
headersStrBuilder.add(header.getKey() + ": " + value);
for (var header : headers) {
if (header.getName().equalsIgnoreCase("X-Has-Cookies")) {
hasCookies = hasCookies || header.getValue().equals("1");
}
else if (header.getName().equalsIgnoreCase("ETag")) {
etag = header.getValue();
}
else if (header.getName().equalsIgnoreCase("Last-Modified")) {
lastModified = header.getValue();
}
else {
headersStrBuilder.add(header.getName() + ": " + header.getValue());
}
}
String headersStr = headersStrBuilder.toString();
@@ -178,14 +192,14 @@ public class CrawledDocumentParquetRecordFileWriter implements AutoCloseable {
domain,
response.target(),
fetchOk.ipAddress(),
headers.firstValue("X-Has-Cookies").orElse("0").equals("1"),
hasCookies,
fetchOk.statusCode(),
response.date(),
contentType,
bodyBytes,
headersStr,
headers.firstValue("ETag").orElse(null),
headers.firstValue("Last-Modified").orElse(null)
etag,
lastModified
));
}

View File

@@ -341,12 +341,15 @@ public record SlopCrawlDataRecord(String domain,
contentType = "";
}
boolean hasCookies = false;
String headersStr;
StringJoiner headersStrBuilder = new StringJoiner("\n");
for (var header : headers.map().entrySet()) {
for (var value : header.getValue()) {
headersStrBuilder.add(header.getKey() + ": " + value);
for (var header : headers) {
if (header.getName().equalsIgnoreCase("X-Cookies") && "1".equals(header.getValue())) {
hasCookies = true;
}
headersStrBuilder.add(header.getName() + ": " + header.getValue());
}
headersStr = headersStrBuilder.toString();
@@ -355,7 +358,7 @@ public record SlopCrawlDataRecord(String domain,
domain,
response.target(),
fetchOk.ipAddress(),
"1".equals(headers.firstValue("X-Cookies").orElse("0")),
hasCookies,
fetchOk.statusCode(),
response.date().toEpochMilli(),
contentType,

View File

@@ -8,6 +8,7 @@ import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.SQLException;
import java.time.Duration;
import java.time.Instant;
import static org.junit.jupiter.api.Assertions.*;
@@ -47,8 +48,8 @@ class DomainStateDbTest {
db.save(allFields);
db.save(minFields);
assertEquals(allFields, db.get("all.marginalia.nu").orElseThrow());
assertEquals(minFields, db.get("min.marginalia.nu").orElseThrow());
assertEquals(allFields, db.getSummary("all.marginalia.nu").orElseThrow());
assertEquals(minFields, db.getSummary("min.marginalia.nu").orElseThrow());
var updatedAllFields = new DomainStateDb.SummaryRecord(
"all.marginalia.nu",
@@ -59,7 +60,19 @@ class DomainStateDbTest {
);
db.save(updatedAllFields);
assertEquals(updatedAllFields, db.get("all.marginalia.nu").orElseThrow());
assertEquals(updatedAllFields, db.getSummary("all.marginalia.nu").orElseThrow());
}
}
@Test
public void testMetadata() throws SQLException {
try (var db = new DomainStateDb(tempFile)) {
var original = new DomainStateDb.CrawlMeta("example.com", Instant.ofEpochMilli(12345), Duration.ofMillis(30), Duration.ofMillis(300), 1, 2, 3);
db.save(original);
var maybeMeta = db.getMeta("example.com");
assertTrue(maybeMeta.isPresent());
assertEquals(original, maybeMeta.get());
}
}

View File

@@ -0,0 +1,146 @@
package nu.marginalia.crawl.fetcher;
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.client.WireMock;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;
import nu.marginalia.UserAgent;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
import nu.marginalia.model.EdgeUrl;
import org.junit.jupiter.api.*;
import java.io.IOException;
import java.net.URISyntaxException;
import static org.junit.jupiter.api.Assertions.assertEquals;
@Tag("slow")
class HttpFetcherImplContentTypeProbeTest {
private HttpFetcherImpl fetcher;
private static WireMockServer wireMockServer;
private static EdgeUrl timeoutUrl;
private static EdgeUrl contentTypeHtmlUrl;
private static EdgeUrl contentTypeBinaryUrl;
private static EdgeUrl redirectUrl;
private static EdgeUrl badHttpStatusUrl;
private static EdgeUrl onlyGetAllowedUrl;
@BeforeAll
public static void setupAll() throws URISyntaxException {
wireMockServer =
new WireMockServer(WireMockConfiguration.wireMockConfig()
.port(18089));
timeoutUrl = new EdgeUrl("http://localhost:18089/timeout.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(timeoutUrl.path))
.willReturn(WireMock.aResponse()
.withFixedDelay(15000))); // 15 second delay to simulate a timeout
contentTypeHtmlUrl = new EdgeUrl("http://localhost:18089/test.html.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(contentTypeHtmlUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withStatus(200)));
contentTypeBinaryUrl = new EdgeUrl("http://localhost:18089/test.bad.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(contentTypeBinaryUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "application/octet-stream")
.withStatus(200)));
redirectUrl = new EdgeUrl("http://localhost:18089/redirect.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(redirectUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Location", "http://localhost:18089/test.html.bin")
.withStatus(301)));
badHttpStatusUrl = new EdgeUrl("http://localhost:18089/badstatus.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(badHttpStatusUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withStatus(500)));
onlyGetAllowedUrl = new EdgeUrl("http://localhost:18089/onlyget.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(onlyGetAllowedUrl.path))
.willReturn(WireMock.aResponse()
.withStatus(405))); // Method Not Allowed
wireMockServer.stubFor(WireMock.get(WireMock.urlEqualTo(onlyGetAllowedUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withStatus(200)));
wireMockServer.start();
}
@AfterAll
public static void tearDownAll() {
wireMockServer.stop();
}
@BeforeEach
public void setUp() {
fetcher = new HttpFetcherImpl(new UserAgent("test.marginalia.nu", "test.marginalia.nu"));
}
@AfterEach
public void tearDown() throws IOException {
var stats = fetcher.getPoolStats();
assertEquals(0, stats.getLeased());
assertEquals(0, stats.getPending());
fetcher.close();
}
@Test
public void testProbeContentTypeHtmlShortcircuitPath() throws URISyntaxException {
var result = fetcher.probeContentType(new EdgeUrl("https://localhost/test.html"), new DomainCookies(), new CrawlDelayTimer(50), ContentTags.empty());
Assertions.assertInstanceOf(HttpFetcher.ContentTypeProbeResult.NoOp.class, result);
}
@Test
public void testProbeContentTypeHtmlShortcircuitTags() {
var result = fetcher.probeContentType(contentTypeBinaryUrl, new DomainCookies(), new CrawlDelayTimer(50), new ContentTags("a", "b"));
Assertions.assertInstanceOf(HttpFetcher.ContentTypeProbeResult.NoOp.class, result);
}
@Test
public void testProbeContentTypeHtml() {
var result = fetcher.probeContentType(contentTypeHtmlUrl, new DomainCookies(), new CrawlDelayTimer(50), ContentTags.empty());
Assertions.assertEquals(new HttpFetcher.ContentTypeProbeResult.Ok(contentTypeHtmlUrl), result);
}
@Test
public void testProbeContentTypeBinary() {
var result = fetcher.probeContentType(contentTypeBinaryUrl, new DomainCookies(), new CrawlDelayTimer(50), ContentTags.empty());
Assertions.assertEquals(new HttpFetcher.ContentTypeProbeResult.BadContentType("application/octet-stream", 200), result);
}
@Test
public void testProbeContentTypeRedirect() {
var result = fetcher.probeContentType(redirectUrl, new DomainCookies(), new CrawlDelayTimer(50), ContentTags.empty());
Assertions.assertEquals(new HttpFetcher.ContentTypeProbeResult.Redirect(contentTypeHtmlUrl), result);
}
@Test
public void testProbeContentTypeBadHttpStatus() {
var result = fetcher.probeContentType(badHttpStatusUrl, new DomainCookies(), new CrawlDelayTimer(50), ContentTags.empty());
Assertions.assertEquals(new HttpFetcher.ContentTypeProbeResult.HttpError(500, "Bad status code"), result);
}
@Test
public void testOnlyGetAllowed() {
var result = fetcher.probeContentType(onlyGetAllowedUrl, new DomainCookies(), new CrawlDelayTimer(50), ContentTags.empty());
Assertions.assertEquals(new HttpFetcher.ContentTypeProbeResult.Ok(onlyGetAllowedUrl), result);
}
@Test
public void testTimeout() {
var result = fetcher.probeContentType(timeoutUrl, new DomainCookies(), new CrawlDelayTimer(50), ContentTags.empty());
Assertions.assertInstanceOf(HttpFetcher.ContentTypeProbeResult.Timeout.class, result);
}
}

View File

@@ -0,0 +1,95 @@
package nu.marginalia.crawl.fetcher;
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.client.WireMock;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;
import nu.marginalia.UserAgent;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.crawldata.CrawlerDomainStatus;
import org.junit.jupiter.api.*;
import java.io.IOException;
import java.net.URISyntaxException;
import static org.junit.jupiter.api.Assertions.assertEquals;
@Tag("slow")
class HttpFetcherImplDomainProbeTest {
private HttpFetcherImpl fetcher;
private static WireMockServer wireMockServer;
private static EdgeUrl timeoutUrl;
@BeforeAll
public static void setupAll() throws URISyntaxException {
wireMockServer =
new WireMockServer(WireMockConfiguration.wireMockConfig()
.port(18089));
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo("/timeout"))
.willReturn(WireMock.aResponse()
.withFixedDelay(15000))); // 15 seconds delay to simulate timeout
wireMockServer.start();
timeoutUrl = new EdgeUrl("http://localhost:18089/timeout");
}
@AfterAll
public static void tearDownAll() {
wireMockServer.stop();
}
@BeforeEach
public void setUp() {
fetcher = new HttpFetcherImpl(new UserAgent("test.marginalia.nu", "test.marginalia.nu"));
}
@AfterEach
public void tearDown() throws IOException {
var stats = fetcher.getPoolStats();
assertEquals(0, stats.getLeased());
assertEquals(0, stats.getPending());
fetcher.close();
}
@Test
public void testProbeDomain() throws URISyntaxException {
var result = fetcher.probeDomain(new EdgeUrl("https://www.marginalia.nu/"));
Assertions.assertEquals(new HttpFetcher.DomainProbeResult.Ok(new EdgeUrl("https://www.marginalia.nu/")), result);
}
@Test
public void testProbeDomainProtoUpgrade() throws URISyntaxException {
var result = fetcher.probeDomain(new EdgeUrl("http://www.marginalia.nu/"));
Assertions.assertEquals(new HttpFetcher.DomainProbeResult.Ok(new EdgeUrl("https://www.marginalia.nu/")), result);
}
@Test
public void testProbeDomainRedirect() throws URISyntaxException {
var result = fetcher.probeDomain(new EdgeUrl("http://search.marginalia.nu/"));
Assertions.assertEquals(new HttpFetcher.DomainProbeResult.Redirect(new EdgeDomain("marginalia-search.com")), result);
}
@Test
public void testProbeDomainOnlyGET() throws URISyntaxException {
// Check that the domain probe still succeeds against a server that only allows GET requests
var result = fetcher.probeDomain(new EdgeUrl("https://marginalia-search.com/"));
Assertions.assertEquals(new HttpFetcher.DomainProbeResult.Ok(new EdgeUrl("https://marginalia-search.com/")), result);
}
@Test
public void testProbeDomainError() throws URISyntaxException {
var result = fetcher.probeDomain(new EdgeUrl("https://invalid.example.com/"));
Assertions.assertEquals(new HttpFetcher.DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Error during domain probe"), result);
}
@Test
public void testProbeDomainTimeout() throws URISyntaxException {
var result = fetcher.probeDomain(timeoutUrl);
Assertions.assertEquals(new HttpFetcher.DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Timeout during domain probe"), result);
}
}

View File

@@ -0,0 +1,381 @@
package nu.marginalia.crawl.fetcher;
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.client.WireMock;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;
import nu.marginalia.UserAgent;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.HttpFetchResult;
import org.junit.jupiter.api.*;
import org.netpreserve.jwarc.*;
import java.io.IOException;
import java.net.URISyntaxException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import static org.junit.jupiter.api.Assertions.assertEquals;
@Tag("slow")
class HttpFetcherImplFetchTest {
private HttpFetcherImpl fetcher;
private static WireMockServer wireMockServer;
private static String etag = "etag";
private static String lastModified = "Wed, 21 Oct 2024 07:28:00 GMT";
private static EdgeUrl okUrl;
private static EdgeUrl okUrlSetsCookie;
private static EdgeUrl okRangeResponseUrl;
private static EdgeUrl okUrlWith304;
private static EdgeUrl timeoutUrl;
private static EdgeUrl redirectUrl;
private static EdgeUrl badHttpStatusUrl;
private static EdgeUrl keepAliveUrl;
@BeforeAll
public static void setupAll() throws URISyntaxException {
wireMockServer =
new WireMockServer(WireMockConfiguration.wireMockConfig()
.port(18089));
timeoutUrl = new EdgeUrl("http://localhost:18089/timeout.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(timeoutUrl.path))
.willReturn(WireMock.aResponse()
.withFixedDelay(15000)
)); // 15 seconds delay to simulate timeout
wireMockServer.stubFor(WireMock.get(WireMock.urlEqualTo(timeoutUrl.path))
.willReturn(WireMock.aResponse()
.withFixedDelay(15000)
.withBody("Hello World")
)); // 15 seconds delay to simulate timeout
redirectUrl = new EdgeUrl("http://localhost:18089/redirect.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(redirectUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Location", "http://localhost:18089/test.html.bin")
.withStatus(301)));
wireMockServer.stubFor(WireMock.get(WireMock.urlEqualTo(redirectUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Location", "http://localhost:18089/test.html.bin")
.withStatus(301)));
badHttpStatusUrl = new EdgeUrl("http://localhost:18089/badstatus");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(badHttpStatusUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withStatus(500)));
wireMockServer.stubFor(WireMock.get(WireMock.urlEqualTo(badHttpStatusUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withStatus(500)));
okUrl = new EdgeUrl("http://localhost:18089/ok.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(okUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withStatus(200)));
wireMockServer.stubFor(WireMock.get(WireMock.urlEqualTo(okUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withStatus(200)
.withBody("Hello World")));
okUrlSetsCookie = new EdgeUrl("http://localhost:18089/okSetCookie.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(okUrlSetsCookie.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withHeader("Set-Cookie", "test=1")
.withStatus(200)));
wireMockServer.stubFor(WireMock.get(WireMock.urlEqualTo(okUrlSetsCookie.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withHeader("Set-Cookie", "test=1")
.withStatus(200)
.withBody("Hello World")));
okUrlWith304 = new EdgeUrl("http://localhost:18089/ok304.bin");
wireMockServer.stubFor(WireMock.head(WireMock.urlEqualTo(okUrlWith304.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withHeader("ETag", etag)
.withHeader("Last-Modified", lastModified)
.withStatus(304)));
wireMockServer.stubFor(WireMock.get(WireMock.urlEqualTo(okUrlWith304.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withHeader("ETag", etag)
.withHeader("Last-Modified", lastModified)
.withStatus(304)));
okRangeResponseUrl = new EdgeUrl("http://localhost:18089/okRangeResponse.bin");
wireMockServer.stubFor(WireMock.get(WireMock.urlEqualTo(okRangeResponseUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Range", "bytes 0-100/200")
.withBody("Hello World")
.withStatus(206)));
keepAliveUrl = new EdgeUrl("http://localhost:18089/keepalive.bin");
wireMockServer.stubFor(WireMock.get(WireMock.urlEqualTo(keepAliveUrl.path))
.willReturn(WireMock.aResponse()
.withHeader("Content-Type", "text/html")
.withStatus(200)
.withHeader("Keep-Alive", "max=4, timeout=30")
.withBody("Hello")
));
wireMockServer.start();
}
@AfterAll
public static void tearDownAll() {
wireMockServer.stop();
}
WarcRecorder warcRecorder;
Path warcFile;
@BeforeEach
public void setUp() throws IOException {
fetcher = new HttpFetcherImpl(new UserAgent("test.marginalia.nu", "test.marginalia.nu"));
warcFile = Files.createTempFile(getClass().getSimpleName(), ".warc");
warcRecorder = new WarcRecorder(warcFile);
}
@AfterEach
public void tearDown() throws IOException {
var stats = fetcher.getPoolStats();
assertEquals(0, stats.getLeased());
assertEquals(0, stats.getPending());
System.out.println(stats);
fetcher.close();
warcRecorder.close();
Files.deleteIfExists(warcFile);
}
@Test
public void testFoo() {
fetcher.fetchSitemapUrls("https://www.marginalia.nu/sitemap.xml", new CrawlDelayTimer(100));
}
@Test
public void testOk_NoProbe() throws IOException {
var result = fetcher.fetchContent(okUrl, warcRecorder, new DomainCookies(), new CrawlDelayTimer(1000), ContentTags.empty(), HttpFetcher.ProbeType.DISABLED);
Assertions.assertInstanceOf(HttpFetchResult.ResultOk.class, result);
Assertions.assertTrue(result.isOk());
List<WarcRecord> warcRecords = getWarcRecords();
assertEquals(2, warcRecords.size());
Assertions.assertInstanceOf(WarcRequest.class, warcRecords.get(0));
Assertions.assertInstanceOf(WarcResponse.class, warcRecords.get(1));
WarcResponse response = (WarcResponse) warcRecords.get(1);
assertEquals("0", response.http().headers().first("X-Has-Cookies").orElse("0"));
}
@Test
public void testOkSetsCookie() throws IOException {
var cookies = new DomainCookies();
var result = fetcher.fetchContent(okUrlSetsCookie, warcRecorder, cookies, new CrawlDelayTimer(1000), ContentTags.empty(), HttpFetcher.ProbeType.DISABLED);
Assertions.assertInstanceOf(HttpFetchResult.ResultOk.class, result);
Assertions.assertTrue(result.isOk());
List<WarcRecord> warcRecords = getWarcRecords();
assertEquals(2, warcRecords.size());
Assertions.assertInstanceOf(WarcRequest.class, warcRecords.get(0));
Assertions.assertInstanceOf(WarcResponse.class, warcRecords.get(1));
WarcResponse response = (WarcResponse) warcRecords.get(1);
assertEquals("1", response.http().headers().first("X-Has-Cookies").orElse("0"));
}
@Test
public void testOk_FullProbe() {
var result = fetcher.fetchContent(okUrl, warcRecorder, new DomainCookies(), new CrawlDelayTimer(1000), ContentTags.empty(), HttpFetcher.ProbeType.FULL);
Assertions.assertInstanceOf(HttpFetchResult.ResultOk.class, result);
Assertions.assertTrue(result.isOk());
}
@Test
public void testOk304_NoProbe() {
var result = fetcher.fetchContent(okUrlWith304, warcRecorder, new DomainCookies(), new CrawlDelayTimer(1000), new ContentTags(etag, lastModified), HttpFetcher.ProbeType.DISABLED);
Assertions.assertInstanceOf(HttpFetchResult.Result304Raw.class, result);
System.out.println(result);
}
@Test
public void testOk304_FullProbe() {
var result = fetcher.fetchContent(okUrlWith304, warcRecorder, new DomainCookies(), new CrawlDelayTimer(1000), new ContentTags(etag, lastModified), HttpFetcher.ProbeType.FULL);
Assertions.assertInstanceOf(HttpFetchResult.Result304Raw.class, result);
System.out.println(result);
}
@Test
public void testBadStatus_NoProbe() throws IOException {
var result = fetcher.fetchContent(badHttpStatusUrl, warcRecorder, new DomainCookies(), new CrawlDelayTimer(1000), ContentTags.empty(), HttpFetcher.ProbeType.DISABLED);
Assertions.assertInstanceOf(HttpFetchResult.ResultOk.class, result);
Assertions.assertFalse(result.isOk());
List<WarcRecord> warcRecords = getWarcRecords();
assertEquals(2, warcRecords.size());
Assertions.assertInstanceOf(WarcRequest.class, warcRecords.get(0));
Assertions.assertInstanceOf(WarcResponse.class, warcRecords.get(1));
}
@Test
public void testBadStatus_FullProbe() {
var result = fetcher.fetchContent(badHttpStatusUrl, warcRecorder, new DomainCookies(), new CrawlDelayTimer(1000), ContentTags.empty(), HttpFetcher.ProbeType.FULL);
Assertions.assertInstanceOf(HttpFetchResult.ResultOk.class, result);
Assertions.assertFalse(result.isOk());
System.out.println(result);
}
@Test
public void testRedirect_NoProbe() throws URISyntaxException, IOException {
var result = fetcher.fetchContent(redirectUrl, warcRecorder, new DomainCookies(), new CrawlDelayTimer(1000), ContentTags.empty(), HttpFetcher.ProbeType.DISABLED);
Assertions.assertInstanceOf(HttpFetchResult.ResultRedirect.class, result);
assertEquals(new EdgeUrl("http://localhost:18089/test.html.bin"), ((HttpFetchResult.ResultRedirect) result).url());
List<WarcRecord> warcRecords = getWarcRecords();
assertEquals(2, warcRecords.size());
Assertions.assertInstanceOf(WarcRequest.class, warcRecords.get(0));
Assertions.assertInstanceOf(WarcResponse.class, warcRecords.get(1));
}
@Test
public void testRedirect_FullProbe() throws URISyntaxException {
var result = fetcher.fetchContent(redirectUrl, warcRecorder, new DomainCookies(), new CrawlDelayTimer(1000), ContentTags.empty(), HttpFetcher.ProbeType.FULL);
Assertions.assertInstanceOf(HttpFetchResult.ResultRedirect.class, result);
assertEquals(new EdgeUrl("http://localhost:18089/test.html.bin"), ((HttpFetchResult.ResultRedirect) result).url());
System.out.println(result);
}
@Test
public void testFetchTimeout_NoProbe() throws IOException, URISyntaxException {
Instant requestStart = Instant.now();
var result = fetcher.fetchContent(timeoutUrl, warcRecorder, new DomainCookies(), new CrawlDelayTimer(1000), ContentTags.empty(), HttpFetcher.ProbeType.DISABLED);
Assertions.assertInstanceOf(HttpFetchResult.ResultException.class, result);
Instant requestEnd = Instant.now();
System.out.println(result);
// Verify that we are actually timing out rather than blocking until the request finishes (which would be a bug).
// The request takes 15 seconds to complete, so we should time out well before that, something like 10 seconds and change;
// but we only assert that it took less than 15 seconds, to make the test less fragile.
Assertions.assertTrue(requestEnd.isBefore(requestStart.plusSeconds(15)), "Request should have taken less than 15 seconds");
var records = getWarcRecords();
Assertions.assertEquals(1, records.size());
Assertions.assertInstanceOf(WarcXEntityRefused.class, records.getFirst());
WarcXEntityRefused entity = (WarcXEntityRefused) records.getFirst();
assertEquals(WarcXEntityRefused.documentProbeTimeout, entity.profile());
assertEquals(timeoutUrl.asURI(), entity.targetURI());
}
@Test
public void testRangeResponse() throws IOException {
var result = fetcher.fetchContent(okRangeResponseUrl, warcRecorder, new DomainCookies(), new CrawlDelayTimer(1000), ContentTags.empty(), HttpFetcher.ProbeType.DISABLED);
Assertions.assertInstanceOf(HttpFetchResult.ResultOk.class, result);
Assertions.assertTrue(result.isOk());
List<WarcRecord> warcRecords = getWarcRecords();
assertEquals(2, warcRecords.size());
Assertions.assertInstanceOf(WarcRequest.class, warcRecords.get(0));
Assertions.assertInstanceOf(WarcResponse.class, warcRecords.get(1));
var response = (WarcResponse) warcRecords.get(1);
assertEquals("length", response.headers().first("WARC-Truncated").orElse(""));
}
@Test
public void testFetchTimeout_Probe() throws IOException, URISyntaxException {
Instant requestStart = Instant.now();
var result = fetcher.fetchContent(timeoutUrl, warcRecorder, new DomainCookies(), new CrawlDelayTimer(1000), ContentTags.empty(), HttpFetcher.ProbeType.FULL);
Instant requestEnd = Instant.now();
Assertions.assertInstanceOf(HttpFetchResult.ResultException.class, result);
// Verify that we are actually timing out rather than blocking until the request finishes (which would be a bug).
// The request takes 15 seconds to complete, so we should time out well before that, something like 10 seconds and change;
// but we only assert that it took less than 15 seconds, to make the test less fragile.
Assertions.assertTrue(requestEnd.isBefore(requestStart.plusSeconds(15)), "Request should have taken less than 15 seconds");
var records = getWarcRecords();
Assertions.assertEquals(1, records.size());
Assertions.assertInstanceOf(WarcXEntityRefused.class, records.getFirst());
WarcXEntityRefused entity = (WarcXEntityRefused) records.getFirst();
assertEquals(WarcXEntityRefused.documentProbeTimeout, entity.profile());
assertEquals(timeoutUrl.asURI(), entity.targetURI());
}
@Test
public void testKeepaliveUrl() {
// mostly for smoke testing and for use as a debugging aid
var result = fetcher.fetchContent(keepAliveUrl, warcRecorder, new DomainCookies(), new CrawlDelayTimer(1000), ContentTags.empty(), HttpFetcher.ProbeType.DISABLED);
Assertions.assertInstanceOf(HttpFetchResult.ResultOk.class, result);
Assertions.assertTrue(result.isOk());
}
private List<WarcRecord> getWarcRecords() throws IOException {
List<WarcRecord> records = new ArrayList<>();
System.out.println(Files.readString(warcFile));
try (var reader = new WarcReader(warcFile)) {
WarcXResponseReference.register(reader);
WarcXEntityRefused.register(reader);
for (var record : reader) {
// Load the body; we need to do this before closing the reader to have access to the content.
if (record instanceof WarcRequest req) {
req.http();
} else if (record instanceof WarcResponse rsp) {
rsp.http();
}
records.add(record);
}
}
return records;
}
}

View File

@@ -1,9 +1,12 @@
package nu.marginalia.crawl.retreival;
import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.DomainCookies;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl;
import org.apache.hc.client5.http.classic.HttpClient;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
@@ -13,8 +16,6 @@ import org.netpreserve.jwarc.WarcResponse;
import java.io.IOException;
import java.net.URISyntaxException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.NoSuchAlgorithmException;
@@ -30,8 +31,7 @@ class CrawlerWarcResynchronizerTest {
HttpClient httpClient;
@BeforeEach
public void setUp() throws Exception {
httpClient = HttpClient.newBuilder()
.build();
httpClient = HttpClients.createDefault();
fileName = Files.createTempFile("test", ".warc.gz");
outputFile = Files.createTempFile("test", ".warc.gz");
@@ -45,7 +45,7 @@ class CrawlerWarcResynchronizerTest {
@Test
void run() throws IOException, URISyntaxException {
try (var oldRecorder = new WarcRecorder(fileName, new Cookies())) {
try (var oldRecorder = new WarcRecorder(fileName)) {
fetchUrl(oldRecorder, "https://www.marginalia.nu/");
fetchUrl(oldRecorder, "https://www.marginalia.nu/log/");
fetchUrl(oldRecorder, "https://www.marginalia.nu/feed/");
@@ -55,7 +55,7 @@ class CrawlerWarcResynchronizerTest {
var crawlFrontier = new DomainCrawlFrontier(new EdgeDomain("www.marginalia.nu"), List.of(), 100);
try (var newRecorder = new WarcRecorder(outputFile, new Cookies())) {
try (var newRecorder = new WarcRecorder(outputFile)) {
new CrawlerWarcResynchronizer(crawlFrontier, newRecorder).run(fileName);
}
@@ -78,11 +78,10 @@ class CrawlerWarcResynchronizerTest {
}
void fetchUrl(WarcRecorder recorder, String url) throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException {
var req = HttpRequest.newBuilder()
.uri(new java.net.URI(url))
.header("User-agent", "test.marginalia.nu")
.header("Accept-Encoding", "gzip")
.GET().build();
recorder.fetch(httpClient, req);
HttpGet request = new HttpGet(url);
request.addHeader("User-agent", "test.marginalia.nu");
request.addHeader("Accept-Encoding", "gzip");
recorder.fetch(httpClient, new DomainCookies(), request);
}
}

View File

@@ -2,10 +2,10 @@ package nu.marginalia.crawl.retreival.fetcher;
import com.sun.net.httpserver.HttpServer;
import nu.marginalia.crawl.fetcher.ContentTags;
import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.DomainCookies;
import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
import nu.marginalia.model.EdgeUrl;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
@@ -32,7 +32,6 @@ class ContentTypeProberTest {
static EdgeUrl timeoutEndpoint;
static Path warcFile;
static WarcRecorder recorder;
@BeforeEach
void setUp() throws IOException {
@@ -80,21 +79,17 @@ class ContentTypeProberTest {
htmlRedirEndpoint = EdgeUrl.parse("http://localhost:" + port + "/redir.gz").get();
fetcher = new HttpFetcherImpl("test");
recorder = new WarcRecorder(warcFile, new Cookies());
}
@AfterEach
void tearDown() throws IOException {
server.stop(0);
fetcher.close();
recorder.close();
Files.deleteIfExists(warcFile);
}
@Test
void probeContentTypeOk() throws Exception {
HttpFetcher.ContentTypeProbeResult result = fetcher.probeContentType(htmlEndpoint, recorder, ContentTags.empty());
HttpFetcher.ContentTypeProbeResult result = fetcher.probeContentType(htmlEndpoint, new DomainCookies(), new CrawlDelayTimer(50), ContentTags.empty());
System.out.println(result);
@@ -103,16 +98,16 @@ class ContentTypeProberTest {
@Test
void probeContentTypeRedir() throws Exception {
HttpFetcher.ContentTypeProbeResult result = fetcher.probeContentType(htmlRedirEndpoint, recorder, ContentTags.empty());
HttpFetcher.ContentTypeProbeResult result = fetcher.probeContentType(htmlRedirEndpoint, new DomainCookies(), new CrawlDelayTimer(50), ContentTags.empty());
System.out.println(result);
assertEquals(result, new HttpFetcher.ContentTypeProbeResult.Ok(htmlEndpoint));
assertEquals(result, new HttpFetcher.ContentTypeProbeResult.Redirect(htmlEndpoint));
}
@Test
void probeContentTypeBad() throws Exception {
HttpFetcher.ContentTypeProbeResult result = fetcher.probeContentType(binaryEndpoint, recorder, ContentTags.empty());
HttpFetcher.ContentTypeProbeResult result = fetcher.probeContentType(binaryEndpoint, new DomainCookies(), new CrawlDelayTimer(50), ContentTags.empty());
System.out.println(result);
@@ -121,7 +116,7 @@ class ContentTypeProberTest {
@Test
void probeContentTypeTimeout() throws Exception {
HttpFetcher.ContentTypeProbeResult result = fetcher.probeContentType(timeoutEndpoint, recorder, ContentTags.empty());
HttpFetcher.ContentTypeProbeResult result = fetcher.probeContentType(timeoutEndpoint, new DomainCookies(), new CrawlDelayTimer(50), ContentTags.empty());
System.out.println(result);

View File

@@ -0,0 +1,162 @@
package nu.marginalia.crawl.retreival.fetcher;
import com.sun.net.httpserver.HttpServer;
import nu.marginalia.crawl.fetcher.DomainCookies;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import org.apache.hc.client5.http.classic.HttpClient;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.junit.jupiter.api.*;
import org.netpreserve.jwarc.WarcReader;
import org.netpreserve.jwarc.WarcRequest;
import org.netpreserve.jwarc.WarcResponse;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;
@Tag("slow")
class WarcRecorderFakeServerTest {
static HttpServer server;
@BeforeAll
public static void setUpAll() throws IOException {
server = HttpServer.create(new InetSocketAddress("127.0.0.1", 14510), 10);
// This endpoint will finish sending the response immediately
server.createContext("/fast", exchange -> {
exchange.getResponseHeaders().add("Content-Type", "text/html");
exchange.sendResponseHeaders(200, "<html><body>hello</body></html>".length());
try (var os = exchange.getResponseBody()) {
os.write("<html><body>hello</body></html>".getBytes());
os.flush();
}
exchange.close();
});
// This endpoint drips the response out over several seconds (two 2-second pauses),
// which should trigger a read timeout in the client
server.createContext("/slow", exchange -> {
exchange.getResponseHeaders().add("Content-Type", "text/html");
exchange.sendResponseHeaders(200, "<html><body>hello</body></html>:D".length());
try (var os = exchange.getResponseBody()) {
os.write("<html><body>hello</body></html>".getBytes());
os.flush();
try {
TimeUnit.SECONDS.sleep(2);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
os.write(":".getBytes());
os.flush();
try {
TimeUnit.SECONDS.sleep(2);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
os.write("D".getBytes());
os.flush();
}
exchange.close();
});
server.start();
}
@AfterAll
public static void tearDownAll() {
server.stop(0);
}
Path fileNameWarc;
Path fileNameParquet;
WarcRecorder client;
HttpClient httpClient;
@BeforeEach
public void setUp() throws Exception {
httpClient = HttpClients.createDefault();
fileNameWarc = Files.createTempFile("test", ".warc");
fileNameParquet = Files.createTempFile("test", ".parquet");
client = new WarcRecorder(fileNameWarc);
}
@AfterEach
public void tearDown() throws Exception {
client.close();
Files.delete(fileNameWarc);
}
@Test
public void fetchFast() throws Exception {
HttpGet request = new HttpGet("http://localhost:14510/fast");
request.addHeader("User-agent", "test.marginalia.nu");
request.addHeader("Accept-Encoding", "gzip");
client.fetch(httpClient, new DomainCookies(), request);
Map<String, String> sampleData = new HashMap<>();
try (var warcReader = new WarcReader(fileNameWarc)) {
warcReader.forEach(record -> {
if (record instanceof WarcRequest req) {
sampleData.put(record.type(), req.target());
}
if (record instanceof WarcResponse rsp) {
sampleData.put(record.type(), rsp.target());
}
});
}
System.out.println(sampleData);
}
@Test
public void fetchSlow() throws Exception {
Instant start = Instant.now();
HttpGet request = new HttpGet("http://localhost:14510/slow");
request.addHeader("User-agent", "test.marginalia.nu");
request.addHeader("Accept-Encoding", "gzip");
client.fetch(httpClient,
new DomainCookies(),
request,
Duration.ofSeconds(1)
);
Instant end = Instant.now();
Map<String, String> sampleData = new HashMap<>();
try (var warcReader = new WarcReader(fileNameWarc)) {
warcReader.forEach(record -> {
if (record instanceof WarcRequest req) {
sampleData.put(record.type(), req.target());
}
if (record instanceof WarcResponse rsp) {
sampleData.put(record.type(), rsp.target());
System.out.println(rsp.target());
}
});
}
System.out.println(
Files.readString(fileNameWarc));
System.out.println(sampleData);
// Timeout is set to 1 second, but the server takes several seconds to finish its response,
// so we expect the request to take about a second and change before it times out.
Assertions.assertTrue(Duration.between(start, end).toMillis() < 3000);
}
}

View File

@@ -2,11 +2,14 @@ package nu.marginalia.crawl.retreival.fetcher;
import nu.marginalia.UserAgent;
import nu.marginalia.crawl.fetcher.ContentTags;
import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.DomainCookies;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileReader;
import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileWriter;
import org.apache.hc.client5.http.classic.HttpClient;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
@@ -17,8 +20,6 @@ import org.netpreserve.jwarc.WarcXResponseReference;
import java.io.IOException;
import java.net.URISyntaxException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.NoSuchAlgorithmException;
@@ -35,12 +36,12 @@ class WarcRecorderTest {
HttpClient httpClient;
@BeforeEach
public void setUp() throws Exception {
httpClient = HttpClient.newBuilder().build();
httpClient = HttpClients.createDefault();
fileNameWarc = Files.createTempFile("test", ".warc");
fileNameParquet = Files.createTempFile("test", ".parquet");
client = new WarcRecorder(fileNameWarc, new Cookies());
client = new WarcRecorder(fileNameWarc);
}
@AfterEach
@@ -51,13 +52,12 @@ class WarcRecorderTest {
@Test
void fetch() throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException {
client.fetch(httpClient,
HttpRequest.newBuilder()
.uri(new java.net.URI("https://www.marginalia.nu/"))
.header("User-agent", "test.marginalia.nu")
.header("Accept-Encoding", "gzip")
.GET().build()
);
HttpGet request = new HttpGet("https://www.marginalia.nu/");
request.addHeader("User-agent", "test.marginalia.nu");
request.addHeader("Accept-Encoding", "gzip");
client.fetch(httpClient, new DomainCookies(), request);
Map<String, String> sampleData = new HashMap<>();
try (var warcReader = new WarcReader(fileNameWarc)) {
@@ -78,8 +78,9 @@ class WarcRecorderTest {
@Test
public void flagAsSkipped() throws IOException, URISyntaxException {
try (var recorder = new WarcRecorder(fileNameWarc, new Cookies())) {
try (var recorder = new WarcRecorder(fileNameWarc)) {
recorder.writeReferenceCopy(new EdgeUrl("https://www.marginalia.nu/"),
new DomainCookies(),
"text/html",
200,
"<?doctype html><html><body>test</body></html>".getBytes(),
@@ -102,8 +103,9 @@ class WarcRecorderTest {
@Test
public void flagAsSkippedNullBody() throws IOException, URISyntaxException {
try (var recorder = new WarcRecorder(fileNameWarc, new Cookies())) {
try (var recorder = new WarcRecorder(fileNameWarc)) {
recorder.writeReferenceCopy(new EdgeUrl("https://www.marginalia.nu/"),
new DomainCookies(),
"text/html",
200,
null,
@@ -114,8 +116,9 @@ class WarcRecorderTest {
@Test
public void testSaveImport() throws URISyntaxException, IOException {
try (var recorder = new WarcRecorder(fileNameWarc, new Cookies())) {
try (var recorder = new WarcRecorder(fileNameWarc)) {
recorder.writeReferenceCopy(new EdgeUrl("https://www.marginalia.nu/"),
new DomainCookies(),
"text/html",
200,
"<?doctype html><html><body>test</body></html>".getBytes(),
@@ -138,23 +141,23 @@ class WarcRecorderTest {
@Test
public void testConvertToParquet() throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException {
client.fetch(httpClient, HttpRequest.newBuilder()
.uri(new java.net.URI("https://www.marginalia.nu/"))
.header("User-agent", "test.marginalia.nu")
.header("Accept-Encoding", "gzip")
.GET().build());
HttpGet request1 = new HttpGet("https://www.marginalia.nu/");
request1.addHeader("User-agent", "test.marginalia.nu");
request1.addHeader("Accept-Encoding", "gzip");
client.fetch(httpClient, HttpRequest.newBuilder()
.uri(new java.net.URI("https://www.marginalia.nu/log/"))
.header("User-agent", "test.marginalia.nu")
.header("Accept-Encoding", "gzip")
.GET().build());
client.fetch(httpClient, new DomainCookies(), request1);
client.fetch(httpClient, HttpRequest.newBuilder()
.uri(new java.net.URI("https://www.marginalia.nu/sanic.png"))
.header("User-agent", "test.marginalia.nu")
.header("Accept-Encoding", "gzip")
.GET().build());
HttpGet request2 = new HttpGet("https://www.marginalia.nu/log/");
request2.addHeader("User-agent", "test.marginalia.nu");
request2.addHeader("Accept-Encoding", "gzip");
client.fetch(httpClient, new DomainCookies(), request2);
HttpGet request3 = new HttpGet("https://www.marginalia.nu/sanic.png");
request3.addHeader("User-agent", "test.marginalia.nu");
request3.addHeader("Accept-Encoding", "gzip");
client.fetch(httpClient, new DomainCookies(), request3);
CrawledDocumentParquetRecordFileWriter.convertWarc(
"www.marginalia.nu",

View File

@@ -1,6 +1,7 @@
package nu.marginalia.crawling;
import nu.marginalia.crawl.fetcher.ContentTags;
import nu.marginalia.crawl.fetcher.DomainCookies;
import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
@@ -31,7 +32,7 @@ class HttpFetcherTest {
void fetchUTF8() throws Exception {
var fetcher = new HttpFetcherImpl("nu.marginalia.edge-crawler");
try (var recorder = new WarcRecorder()) {
var result = fetcher.fetchContent(new EdgeUrl("https://www.marginalia.nu"), recorder, ContentTags.empty(), HttpFetcher.ProbeType.FULL);
var result = fetcher.fetchContent(new EdgeUrl("https://www.marginalia.nu"), recorder, new DomainCookies(), new CrawlDelayTimer(100), ContentTags.empty(), HttpFetcher.ProbeType.FULL);
if (DocumentBodyExtractor.asString(result) instanceof DocumentBodyResult.Ok bodyOk) {
System.out.println(bodyOk.contentType());
}
@@ -49,7 +50,7 @@ class HttpFetcherTest {
var fetcher = new HttpFetcherImpl("nu.marginalia.edge-crawler");
try (var recorder = new WarcRecorder()) {
var result = fetcher.fetchContent(new EdgeUrl("https://www.marginalia.nu/robots.txt"), recorder, ContentTags.empty(), HttpFetcher.ProbeType.FULL);
var result = fetcher.fetchContent(new EdgeUrl("https://www.marginalia.nu/robots.txt"), recorder, new DomainCookies(), new CrawlDelayTimer(100), ContentTags.empty(), HttpFetcher.ProbeType.FULL);
if (DocumentBodyExtractor.asString(result) instanceof DocumentBodyResult.Ok bodyOk) {
System.out.println(bodyOk.contentType());
}

View File

@@ -15,6 +15,9 @@ import nu.marginalia.model.crawldata.CrawledDocument;
import nu.marginalia.model.crawldata.CrawlerDocumentStatus;
import nu.marginalia.model.crawldata.SerializableCrawlData;
import nu.marginalia.test.CommonTestData;
import org.apache.hc.client5.http.cookie.BasicCookieStore;
import org.apache.hc.client5.http.cookie.CookieStore;
import org.apache.hc.core5.http.Header;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
@@ -24,7 +27,6 @@ import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.URISyntaxException;
import java.net.http.HttpHeaders;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.SQLException;
@@ -120,7 +122,7 @@ public class CrawlerMockFetcherTest {
public void setAllowAllContentTypes(boolean allowAllContentTypes) {}
@Override
public Cookies getCookies() { return new Cookies();}
public CookieStore getCookies() { return new BasicCookieStore();}
@Override
public void clearCookies() {}
@@ -132,13 +134,7 @@ public class CrawlerMockFetcherTest {
}
@Override
public ContentTypeProbeResult probeContentType(EdgeUrl url, WarcRecorder recorder, ContentTags tags) {
logger.info("Probing {}", url);
return new HttpFetcher.ContentTypeProbeResult.Ok(url);
}
@Override
public HttpFetchResult fetchContent(EdgeUrl url, WarcRecorder recorder, ContentTags tags, ProbeType probeType) {
public HttpFetchResult fetchContent(EdgeUrl url, WarcRecorder recorder, DomainCookies cookies, CrawlDelayTimer timer, ContentTags tags, ProbeType probeType) {
logger.info("Fetching {}", url);
if (mockData.containsKey(url)) {
byte[] bodyBytes = mockData.get(url).documentBodyBytes;
@@ -147,7 +143,7 @@ public class CrawlerMockFetcherTest {
return new HttpFetchResult.ResultOk(
url.asURI(),
200,
HttpHeaders.of(Map.of(), (k,v)->true),
new Header[0],
"127.0.0.1",
bodyBytes,
0,

View File

@@ -5,7 +5,6 @@ import nu.marginalia.WmsaHome;
import nu.marginalia.atags.model.DomainLinks;
import nu.marginalia.crawl.CrawlerMain;
import nu.marginalia.crawl.DomainStateDb;
import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
@@ -16,7 +15,7 @@ import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.crawldata.CrawledDocument;
import nu.marginalia.model.crawldata.CrawledDomain;
import nu.marginalia.model.crawldata.SerializableCrawlData;
import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileWriter;
import nu.marginalia.slop.SlopCrawlDataRecord;
import org.jetbrains.annotations.NotNull;
import org.junit.jupiter.api.*;
import org.netpreserve.jwarc.*;
@@ -37,16 +36,16 @@ class CrawlerRetreiverTest {
private HttpFetcher httpFetcher;
Path tempFileWarc1;
Path tempFileParquet1;
Path tempFileSlop1;
Path tempFileWarc2;
Path tempFileParquet2;
Path tempFileSlop2;
Path tempFileWarc3;
Path tempFileDb;
@BeforeEach
public void setUp() throws IOException {
httpFetcher = new HttpFetcherImpl("search.marginalia.nu; testing a bit :D");
tempFileParquet1 = Files.createTempFile("crawling-process", ".parquet");
tempFileParquet2 = Files.createTempFile("crawling-process", ".parquet");
tempFileSlop1 = Files.createTempFile("crawling-process", ".slop.zip");
tempFileSlop2 = Files.createTempFile("crawling-process", ".slop.zip");
tempFileDb = Files.createTempFile("crawling-process", ".db");
}
@@ -62,14 +61,14 @@ class CrawlerRetreiverTest {
if (tempFileWarc1 != null) {
Files.deleteIfExists(tempFileWarc1);
}
if (tempFileParquet1 != null) {
Files.deleteIfExists(tempFileParquet1);
if (tempFileSlop1 != null) {
Files.deleteIfExists(tempFileSlop1);
}
if (tempFileWarc2 != null) {
Files.deleteIfExists(tempFileWarc2);
}
if (tempFileParquet2 != null) {
Files.deleteIfExists(tempFileParquet2);
if (tempFileSlop2 != null) {
Files.deleteIfExists(tempFileSlop2);
}
if (tempFileWarc3 != null) {
Files.deleteIfExists(tempFileWarc3);
@@ -180,7 +179,7 @@ class CrawlerRetreiverTest {
new EdgeDomain("www.marginalia.nu"),
List.of(), 100);
var resync = new CrawlerWarcResynchronizer(revisitCrawlFrontier,
new WarcRecorder(tempFileWarc2, new Cookies())
new WarcRecorder(tempFileWarc2)
);
// truncate the size of the file to simulate a crash
@@ -224,9 +223,9 @@ class CrawlerRetreiverTest {
doCrawl(tempFileWarc1, specs);
convertToParquet(tempFileWarc1, tempFileParquet1);
convertToSlop(tempFileWarc1, tempFileSlop1);
try (var stream = SerializableCrawlDataStream.openDataStream(tempFileParquet1)) {
try (var stream = SerializableCrawlDataStream.openDataStream(tempFileSlop1)) {
while (stream.hasNext()) {
if (stream.next() instanceof CrawledDocument doc) {
data.add(doc);
@@ -277,9 +276,9 @@ class CrawlerRetreiverTest {
assertFalse(frontier.isVisited(new EdgeUrl("https://www.marginalia.nu/log/06-optimization/")));
assertTrue(frontier.isKnown(new EdgeUrl("https://www.marginalia.nu/log/06-optimization/")));
convertToParquet(tempFileWarc1, tempFileParquet1);
convertToSlop(tempFileWarc1, tempFileSlop1);
try (var stream = SerializableCrawlDataStream.openDataStream(tempFileParquet1)) {
try (var stream = SerializableCrawlDataStream.openDataStream(tempFileSlop1)) {
while (stream.hasNext()) {
if (stream.next() instanceof CrawledDocument doc) {
data.add(doc);
@@ -293,7 +292,7 @@ class CrawlerRetreiverTest {
// redirects to https://www.marginalia.nu/log/06-optimization.gmi/ (note the trailing slash)
//
// Ensure that the redirect is followed, and that the trailing slash is added
// to the url as reported in the parquet file.
// to the url as reported in the Slop file.
var fetchedUrls =
data.stream()
@@ -326,9 +325,9 @@ class CrawlerRetreiverTest {
tempFileWarc1 = Files.createTempFile("crawling-process", ".warc");
doCrawl(tempFileWarc1, specs);
convertToParquet(tempFileWarc1, tempFileParquet1);
convertToSlop(tempFileWarc1, tempFileSlop1);
try (var stream = SerializableCrawlDataStream.openDataStream(tempFileParquet1)) {
try (var stream = SerializableCrawlDataStream.openDataStream(tempFileSlop1)) {
while (stream.hasNext()) {
if (stream.next() instanceof CrawledDocument doc) {
data.add(doc);
@@ -373,11 +372,11 @@ class CrawlerRetreiverTest {
tempFileWarc2 = Files.createTempFile("crawling-process", ".warc.gz");
doCrawl(tempFileWarc1, specs);
convertToParquet(tempFileWarc1, tempFileParquet1);
convertToSlop(tempFileWarc1, tempFileSlop1);
doCrawlWithReferenceStream(specs,
new CrawlDataReference(tempFileParquet1)
new CrawlDataReference(tempFileSlop1)
);
convertToParquet(tempFileWarc2, tempFileParquet2);
convertToSlop(tempFileWarc2, tempFileSlop2);
try (var reader = new WarcReader(tempFileWarc2)) {
WarcXResponseReference.register(reader);
@@ -396,7 +395,7 @@ class CrawlerRetreiverTest {
});
}
try (var ds = SerializableCrawlDataStream.openDataStream(tempFileParquet2)) {
try (var ds = SerializableCrawlDataStream.openDataStream(tempFileSlop2)) {
while (ds.hasNext()) {
var doc = ds.next();
if (doc instanceof CrawledDomain dr) {
@@ -411,9 +410,9 @@ class CrawlerRetreiverTest {
}
}
private void convertToParquet(Path tempFileWarc2, Path tempFileParquet2) {
CrawledDocumentParquetRecordFileWriter.convertWarc("www.marginalia.nu",
new UserAgent("test", "test"), tempFileWarc2, tempFileParquet2);
private void convertToSlop(Path tempFileWarc2, Path tempFileSlop2) throws IOException {
SlopCrawlDataRecord.convertWarc("www.marginalia.nu",
new UserAgent("test", "test"), tempFileWarc2, tempFileSlop2);
}
@@ -436,9 +435,9 @@ class CrawlerRetreiverTest {
doCrawl(tempFileWarc1, specs);
convertToParquet(tempFileWarc1, tempFileParquet1);
convertToSlop(tempFileWarc1, tempFileSlop1);
try (var stream = SerializableCrawlDataStream.openDataStream(tempFileParquet1)) {
try (var stream = SerializableCrawlDataStream.openDataStream(tempFileSlop1)) {
while (stream.hasNext()) {
var doc = stream.next();
data.computeIfAbsent(doc.getClass(), c -> new ArrayList<>()).add(doc);
@@ -449,14 +448,14 @@ class CrawlerRetreiverTest {
System.out.println("---");
doCrawlWithReferenceStream(specs, new CrawlDataReference(tempFileParquet1));
doCrawlWithReferenceStream(specs, new CrawlDataReference(tempFileSlop1));
var revisitCrawlFrontier = new DomainCrawlFrontier(
new EdgeDomain("www.marginalia.nu"),
List.of(), 100);
var resync = new CrawlerWarcResynchronizer(revisitCrawlFrontier,
new WarcRecorder(tempFileWarc3, new Cookies())
new WarcRecorder(tempFileWarc3)
);
// truncate the size of the file to simulate a crash
@@ -465,7 +464,7 @@ class CrawlerRetreiverTest {
resync.run(tempFileWarc2);
assertTrue(revisitCrawlFrontier.addKnown(new EdgeUrl("https://www.marginalia.nu/")));
convertToParquet(tempFileWarc3, tempFileParquet2);
convertToSlop(tempFileWarc3, tempFileSlop2);
try (var reader = new WarcReader(tempFileWarc3)) {
@@ -485,7 +484,7 @@ class CrawlerRetreiverTest {
});
}
try (var ds = SerializableCrawlDataStream.openDataStream(tempFileParquet2)) {
try (var ds = SerializableCrawlDataStream.openDataStream(tempFileSlop2)) {
while (ds.hasNext()) {
var doc = ds.next();
if (doc instanceof CrawledDomain dr) {
@@ -507,7 +506,7 @@ class CrawlerRetreiverTest {
}
private void doCrawlWithReferenceStream(CrawlerMain.CrawlSpecRecord specs, CrawlDataReference reference) {
try (var recorder = new WarcRecorder(tempFileWarc2, new Cookies());
try (var recorder = new WarcRecorder(tempFileWarc2);
var db = new DomainStateDb(tempFileDb)
) {
new CrawlerRetreiver(httpFetcher, new DomainProber(d -> true), specs, db, recorder).crawlDomain(new DomainLinks(), reference);
@@ -519,7 +518,7 @@ class CrawlerRetreiverTest {
@NotNull
private DomainCrawlFrontier doCrawl(Path tempFileWarc1, CrawlerMain.CrawlSpecRecord specs) {
try (var recorder = new WarcRecorder(tempFileWarc1, new Cookies());
try (var recorder = new WarcRecorder(tempFileWarc1);
var db = new DomainStateDb(tempFileDb)
) {
var crawler = new CrawlerRetreiver(httpFetcher, new DomainProber(d -> true), specs, db, recorder);

View File

@@ -7,8 +7,7 @@ import java.util.Arrays;
public enum SearchJsParameter {
DEFAULT("default"),
DENY_JS("no-js", "js:true"),
REQUIRE_JS("yes-js", "js:false");
DENY_JS("no-js", "special:scripts");
public final String value;
public final String[] implictExcludeSearchTerms;
@@ -20,7 +19,6 @@ public enum SearchJsParameter {
public static SearchJsParameter parse(@Nullable String value) {
if (DENY_JS.value.equals(value)) return DENY_JS;
if (REQUIRE_JS.value.equals(value)) return REQUIRE_JS;
return DEFAULT;
}

View File

@@ -41,6 +41,7 @@ dependencies {
implementation project(':code:functions:live-capture:api')
implementation project(':code:functions:math:api')
implementation project(':code:functions:favicon:api')
implementation project(':code:functions:domain-info:api')
implementation project(':code:functions:search-query:api')

View File

@@ -3,8 +3,14 @@ package nu.marginalia.search;
import com.google.inject.Inject;
import io.jooby.Context;
import io.jooby.Jooby;
import io.jooby.MediaType;
import io.jooby.StatusCode;
import io.prometheus.client.Counter;
import io.prometheus.client.Histogram;
import nu.marginalia.WebsiteUrl;
import nu.marginalia.api.favicon.FaviconClient;
import nu.marginalia.db.DbDomainQueries;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.search.svc.*;
import nu.marginalia.service.discovery.property.ServicePartition;
import nu.marginalia.service.server.BaseServiceParams;
@@ -13,10 +19,14 @@ import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.List;
import java.util.NoSuchElementException;
public class SearchService extends JoobyService {
private final WebsiteUrl websiteUrl;
private final SearchSiteSubscriptionService siteSubscriptionService;
private final FaviconClient faviconClient;
private final DbDomainQueries domainQueries;
private static final Logger logger = LoggerFactory.getLogger(SearchService.class);
private static final Histogram wmsa_search_service_request_time = Histogram.build()
@@ -33,12 +43,15 @@ public class SearchService extends JoobyService {
@Inject
public SearchService(BaseServiceParams params,
WebsiteUrl websiteUrl,
SearchFrontPageService frontPageService,
SearchAddToCrawlQueueService addToCrawlQueueService,
SearchSiteSubscriptionService siteSubscriptionService,
SearchSiteInfoService siteInfoService,
SearchCrosstalkService crosstalkService,
SearchBrowseService searchBrowseService,
FaviconClient faviconClient,
DbDomainQueries domainQueries,
SearchQueryService searchQueryService)
throws Exception {
super(params,
@@ -51,8 +64,11 @@ public class SearchService extends JoobyService {
new SearchAddToCrawlQueueService_(addToCrawlQueueService),
new SearchBrowseService_(searchBrowseService)
));
this.websiteUrl = websiteUrl;
this.siteSubscriptionService = siteSubscriptionService;
this.faviconClient = faviconClient;
this.domainQueries = domainQueries;
}
@Override
@@ -62,6 +78,36 @@ public class SearchService extends JoobyService {
final String startTimeAttribute = "start-time";
jooby.get("/export-opml", siteSubscriptionService::exportOpml);
jooby.get("/site/https://*", this::handleSiteUrlRedirect);
jooby.get("/site/http://*", this::handleSiteUrlRedirect);
String emptySvg = "<svg xmlns=\"http://www.w3.org/2000/svg\"></svg>";
jooby.get("/site/{domain}/favicon", ctx -> {
String domain = ctx.path("domain").value();
logger.info("Finding icon for domain {}", domain);
try {
DbDomainQueries.DomainIdWithNode domainIdWithNode = domainQueries.getDomainIdWithNode(new EdgeDomain(domain));
var faviconMaybe = faviconClient.getFavicon(domain, domainIdWithNode.nodeAffinity());
if (faviconMaybe.isEmpty()) {
ctx.setResponseType(MediaType.valueOf("image/svg+xml"));
return emptySvg;
} else {
var favicon = faviconMaybe.get();
ctx.responseStream(MediaType.valueOf(favicon.contentType()), consumer -> {
consumer.write(favicon.bytes());
});
}
}
catch (NoSuchElementException ex) {
ctx.setResponseType(MediaType.valueOf("image/svg+xml"));
return emptySvg;
}
return "";
});
jooby.before((Context ctx) -> {
ctx.setAttribute(startTimeAttribute, System.nanoTime());
});
@@ -80,5 +126,19 @@ public class SearchService extends JoobyService {
});
}
/** Redirect handler for the case when the user passes
* an url like /site/https://example.com/, in this
* scenario we want to extract the domain name and redirect
* to /site/example.com/
*/
private Context handleSiteUrlRedirect(Context ctx) {
var pv = ctx.path("*").value();
int trailSlash = pv.indexOf('/');
if (trailSlash > 0) {
pv = pv.substring(0, trailSlash);
}
ctx.sendRedirect(StatusCode.TEMPORARY_REDIRECT, websiteUrl.withPath("site/" + pv));
return ctx;
}
}
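For illustration (assuming the wildcard capture behaves as in the route declarations above): a request for /site/https://www.marginalia.nu/log/ yields pv = "www.marginalia.nu/log/", which is trimmed at the first slash to "www.marginalia.nu" and answered with a temporary redirect (StatusCode.TEMPORARY_REDIRECT) to /site/www.marginalia.nu.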

View File

@@ -7,9 +7,7 @@ import java.util.Arrays;
public enum SearchJsParameter {
DEFAULT("default"),
DENY_JS("no-js", "js:true"),
REQUIRE_JS("yes-js", "js:false");
DENY_JS("no-js", "special:scripts");
public final String value;
public final String[] implictExcludeSearchTerms;
@@ -20,7 +18,6 @@ public enum SearchJsParameter {
public static SearchJsParameter parse(@Nullable String value) {
if (DENY_JS.value.equals(value)) return DENY_JS;
if (REQUIRE_JS.value.equals(value)) return REQUIRE_JS;
return DEFAULT;
}

View File

@@ -86,8 +86,10 @@ public record SearchParameters(WebsiteUrl url,
public String renderUrl() {
StringBuilder pathBuilder = new StringBuilder("/search?");
pathBuilder.append("query=").append(URLEncoder.encode(query, StandardCharsets.UTF_8));
if (query != null) {
pathBuilder.append("query=").append(URLEncoder.encode(query, StandardCharsets.UTF_8));
}
if (profile != SearchProfile.NO_FILTER) {
pathBuilder.append("&profile=").append(URLEncoder.encode(profile.filterId, StandardCharsets.UTF_8));
}

View File

@@ -67,6 +67,10 @@ public class DecoratedSearchResults {
return focusDomainId >= 0;
}
public boolean isEmpty() {
return results.isEmpty();
}
public SearchFilters getFilters() {
return filters;
}

View File

@@ -81,6 +81,7 @@ public class SearchFilters {
),
List.of(
new Filter("Vintage", "fa-clock-rotate-left", SearchProfile.VINTAGE, parameters),
new Filter("Small Web", "fa-minus", SearchProfile.SMALLWEB, parameters),
new Filter("Plain Text", "fa-file", SearchProfile.PLAIN_TEXT, parameters),
new Filter("Tilde", "fa-house", SearchProfile.TILDE, parameters)
),

View File

@@ -56,7 +56,9 @@ public class SearchQueryService {
}
catch (Exception ex) {
logger.error("Error", ex);
return errorPageService.serveError(SearchParameters.defaultsForQuery(websiteUrl, query, page));
return errorPageService.serveError(
SearchParameters.defaultsForQuery(websiteUrl, query, Objects.requireNonNullElse(page, 1))
);
}
}

View File

@@ -10,7 +10,7 @@
@template.part.head(title = "Marginalia Search - Explore")
<body class="min-h-screen bg-slate-100 dark:bg-gray-900 dark:text-white font-sans ">
<body class="min-h-screen bg-bgblue dark:bg-gray-900 dark:text-white font-sans ">
@template.part.navbar(navbar = navbar)
@@ -23,7 +23,7 @@
</header>
<div class="max-w-[1400px] mx-auto flex flex-col gap-1 place-items-center">
<div class="border dark:border-gray-600 bg-white dark:bg-gray-800 dark:text-gray-100 my-4 p-3 rounded overflow-hidden flex flex-col space-y-4">
<div class="border border-gray-300 dark:border-gray-600 bg-white dark:bg-gray-800 dark:text-gray-100 my-4 p-3 rounded overflow-hidden flex flex-col space-y-4">
@if (results.hasFocusDomain())
<div class="flex space-x-1">
<span>Showing websites similar to <a class="font-mono text-liteblue dark:text-blue-200" href="/site/${results.focusDomain()}"><i class="fas fa-globe"></i> <span class="underline">${results.focusDomain()}</span></a></span>
@@ -36,7 +36,7 @@
</div>
<div class="grid-cols-1 gap-4 sm:grid sm:grid-cols-1 md:grid-cols-3 xl:grid-cols-4 mx-auto sm:p-4">
@for (BrowseResult result : results.results())
<div class="bg-white border dark:border-gray-600 dark:bg-gray-800 rounded overflow-hidden">
<div class="bg-white border border-gray-300 dark:border-gray-600 dark:bg-gray-800 rounded overflow-hidden">
<div class="bg-margeblue text-white p-2 flex space-x-4 text-sm">
<span class="break-words">${result.displayDomain()}</span>
<div class="grow"></div>

View File

@@ -9,7 +9,7 @@
<span>
Access logs containing IP-addresses are retained for up to 24 hours,
anonymized logs with source addresses removed are sometimes kept longer
for to help diagnosing bugs.
to help diagnose bugs.
</span>
</div>
<div class="flex space-y-4 flex-col">

View File

@@ -9,6 +9,15 @@
nicotine: '#f8f8ee',
margeblue: '#3e5f6f',
liteblue: '#0066cc',
bgblue: '#e5e9eb',
},
screens: {
'coarsepointer': {
'raw': '(pointer: coarse)'
},
'finepointer': {
'raw': '(pointer: fine)'
},
}
},
screens: {

View File

@@ -15,7 +15,7 @@
@template.part.head(title = "Marginalia Search - " + parameters.query())
<body class="min-h-screen bg-slate-100 dark:bg-gray-900 dark:text-white font-sans " >
<body class="min-h-screen bg-bgblue dark:bg-gray-900 dark:text-white font-sans " >
@template.part.navbar(navbar = navbar)
<div>

View File

@@ -9,7 +9,7 @@
@template.part.head(title = "Marginalia Search - Error")
<body class="min-h-screen bg-slate-100 dark:bg-gray-900 dark:text-white font-sans " >
<body class="min-h-screen bg-bgblue dark:bg-gray-900 dark:text-white font-sans " >
@template.part.navbar(navbar = navbar)

View File

@@ -11,7 +11,7 @@
@template.part.head(title = "Marginalia Search - " + results.getQuery())
<body class="min-h-screen bg-slate-100 dark:bg-gray-900 dark:text-white font-sans " >
<body class="min-h-screen bg-bgblue dark:bg-gray-900 dark:text-white font-sans " >
@template.part.navbar(navbar = navbar)
<div>
@@ -23,7 +23,7 @@
@template.serp.part.searchform(query = results.getParams().query(), profile = results.getProfile(), filters = results.getFilters())
</div>
<div class="grow"></div>
<button class="fixed bottom-10 right-5 sm:hidden text-sm bg-margeblue text-white p-4 rounded-xl active:text-slate-200" id="filter-button">
<button class="fixed bottom-10 right-5 finepointer:hidden md:hidden text-sm bg-margeblue text-white p-4 rounded-xl active:text-slate-200" id="filter-button">
<i class="fas fa-filter mr-3"></i>
Filters
</button>
@@ -44,6 +44,11 @@
<div class="grow"></div>
<a href="${results.getParams().renderUrlWithoutSiteFocus()}" class="fa fa-remove"></a>
</div>
@elseif (results.isEmpty())
<div class="border dark:border-gray-600 rounded flex space-x-4 bg-white dark:bg-gray-800 text-gray-600 dark:text-gray-100 text-sm p-4 items-center">
No search results found. Try different search terms, or spelling variations. The search engine currently
only supports queries in the English language.
</div>
@endif
<div class="space-y-4 sm:space-y-6">

View File

@@ -26,15 +26,15 @@
It operates a bit like a clock, starting at the top and working its way around clockwise.</p>
<div class="flex gap-4 place-items-middle">
@template.serp.part.matchogram(mask = 90)
@template.serp.part.matchogram(mask = 90, domain = "example.com")
<div>This is by the beginning</div>
</div>
<div class="flex gap-4 place-items-middle">
@template.serp.part.matchogram(mask = 90L<<26)
@template.serp.part.matchogram(mask = 90L<<26, domain = "example.com")
<div>This is in the middle</div>
</div>
<div class="flex gap-4 place-items-middle">
@template.serp.part.matchogram(mask = 5L<<48)
@template.serp.part.matchogram(mask = 5L<<48, domain = "example.com")
<div>This is toward the end</div>
</div>
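For reference, a small sketch (not part of the patch) of the geometry the matchogram template uses after this change: each of the 56 bits in the positions mask is one tick on the clock face, drawn radially between radius 15 and 17 around the new (20, 20) centre. The class and helper names are hypothetical.

// Hypothetical helper illustrating the matchogram geometry; mirrors the template math above.
class MatchogramGeometry {
    /** Returns {x1, y1, x2, y2} for the tick corresponding to the given bit (0..55). */
    static double[] tickEndpoints(int bit) {
        double angle = 2 * Math.PI * bit / 56.;   // bit 0 points straight up, increasing clockwise
        return new double[] {
                20 + 15 * Math.sin(angle), 20 - 15 * Math.cos(angle),   // inner end (r = 15)
                20 + 17 * Math.sin(angle), 20 - 17 * Math.cos(angle)    // outer end (r = 17)
        };
    }

    public static void main(String[] args) {
        long mask = 90L << 26;                     // the "middle of the document" example above
        for (int bit = 0; bit < 56; bit++) {
            if ((mask & (1L << bit)) != 0) {
                System.out.println(bit + " -> " + java.util.Arrays.toString(tickEndpoints(bit)));
            }
        }
    }
}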

View File

@@ -1,11 +1,13 @@
@import java.util.stream.IntStream
@param long mask
@param String domain
<svg width="40" height="40">
<svg width="40" height="40"
style="background-image: url('/site/${domain}/favicon'); background-repeat: no-repeat; background-size: 16px 16px; background-position: center; ">
<circle
cx="18"
cy="18"
cx="20"
cy="20"
r="16"
fill="none"
stroke="#eee"
@@ -13,10 +15,10 @@
/>
@for (int bit : IntStream.range(0, 56).filter(bit -> (mask & (1L << bit)) != 0).toArray())
<line
x1="${18 + 15*Math.sin(2 * Math.PI * bit / 56.)}"
y1="${18 - 15*Math.cos(2 * Math.PI * bit / 56.)}"
x2="${18 + 17*Math.sin(2 * Math.PI * bit / 56.)}"
y2="${18 - 17*Math.cos(2 * Math.PI * bit / 56.)}"
x1="${20 + 15*Math.sin(2 * Math.PI * bit / 56.)}"
y1="${20 - 15*Math.cos(2 * Math.PI * bit / 56.)}"
x2="${20 + 17*Math.sin(2 * Math.PI * bit / 56.)}"
y2="${20 - 17*Math.cos(2 * Math.PI * bit / 56.)}"
stroke="#444"
stroke-width="2"
/>

View File

@@ -12,7 +12,7 @@
<div class="flex flex-col grow" >
<div class="flex flex-row space-x-2 place-items-center">
<div class="flex-0" title="Match density">
@template.serp.part.matchogram(mask = result.first.positionsMask)
@template.serp.part.matchogram(mask = result.first.positionsMask, domain=result.getFirst().url.domain.toString())
</div>
<div class="flex grow justify-between items-start">
<div class="flex-1">

View File

@@ -3,7 +3,7 @@
@param SearchFilters filters
<aside class="md:w-64 py-4 shrink-0 hidden sm:block">
<aside class="md:w-64 py-4 shrink-0 hidden md:block finepointer:block">
<div class="space-y-6 sticky top-4">
<div class="bg-white dark:bg-gray-800 p-4 border dark:border-gray-600 border-gray-300">
<h2 class="font-medium mb-3 flex items-center font-serif hidden md:block">
@@ -13,7 +13,7 @@
@for (List<SearchFilters.Filter> filterGroup : filters.getFilterGroups())
@for (SearchFilters.Filter filter : filterGroup)
<label class="flex items-center">
<button title="${filter.displayName}" onclick="document.location='$unsafe{filter.url}'" class="flex-1 py-2 pl-2 rounded flex space-x-2 dark:has-[:checked]:bg-gray-950 has-[:checked]:bg-gray-100 has-[:checked]:text-slate-900 dark:has-[:checked]:text-slate-100 hover:bg-gray-50 dark:hover:bg-gray-950 bg-white dark:bg-gray-900 dark:border dark:border-gray-600 text-margeblue dark:text-slate-200 outline-1 active:outline">
<button title="${filter.displayName}" onclick="document.location='$unsafe{filter.url}'" class="flex-1 py-2 pl-2 rounded flex space-x-2 dark:has-[:checked]:bg-gray-950 has-[:checked]:bg-gray-300 has-[:checked]:text-slate-900 dark:has-[:checked]:text-slate-100 hover:bg-gray-50 dark:hover:bg-gray-950 bg-white dark:bg-gray-900 dark:border dark:border-gray-600 text-margeblue dark:text-slate-200 outline-1 active:outline">
@if (filter.current)
<input type="checkbox" checked class="sr-only" aria-checked="true" />
@else
@@ -38,7 +38,7 @@
<div class="space-y-2">
@for (SearchFilters.SearchOption option : filters.searchOptions())
<label class="flex items-center">
<button title="${option.name()}" onclick="document.location='$unsafe{option.getUrl()}'" class="flex-1 py-2 pl-2 rounded flex space-x-2 dark:has-[:checked]:bg-gray-950 has-[:checked]:bg-gray-100 has-[:checked]:text-slate-900 dark:has-[:checked]:text-slate-100 hover:bg-gray-50 dark:hover:bg-gray-950 bg-white dark:bg-gray-900 dark:border dark:border-gray-600 text-margeblue dark:text-slate-200 outline-1 active:outline">
<button title="${option.name()}" onclick="document.location='$unsafe{option.getUrl()}'" class="flex-1 py-2 pl-2 rounded flex space-x-2 dark:has-[:checked]:bg-gray-950 has-[:checked]:bg-gray-300 has-[:checked]:text-slate-900 dark:has-[:checked]:text-slate-100 hover:bg-gray-50 dark:hover:bg-gray-950 bg-white dark:bg-gray-900 dark:border dark:border-gray-600 text-margeblue dark:text-slate-200 outline-1 active:outline">
@if (option.isSet())
<input type="checkbox" checked class="sr-only" aria-checked="true" />
@else

View File

@@ -15,7 +15,7 @@
@template.part.head(title = "Marginalia Search", allowIndexing = true)
<body class="min-h-screen bg-slate-100 dark:bg-gray-900 dark:text-white font-sans " >
<body class="min-h-screen bg-bgblue dark:bg-gray-900 dark:text-white font-sans " >
@template.part.navbar(navbar = navbar)
@@ -32,18 +32,14 @@
@if (model.news().isEmpty())
<div class="max-w-7xl mx-auto flex flex-col space-y-4 fill-w">
<div class="border dark:border-gray-600 dark:bg-gray-800 bg-white rounded p-2 m-4 ">
<div class="border border-gray-300 border-gray-100 dark:border-gray-600 dark:bg-gray-800 bg-white rounded p-2 m-4 ">
<div class="text-slate-700 dark:text-white text-sm p-4">
<div class="fas fa-gift mr-1 text-margeblue dark:text-slate-200"></div>
This is the new design and home of Marginalia Search.
You can read about what this entails <a href="https://about.marginalia-search.com/article/redesign/" class="underline text-liteblue dark:text-blue-200">here</a>.
<p class="my-4"></p>
The old version of Marginalia Search remains available at
<a href="https://old-search.marginalia.nu/" class="underline text-liteblue dark:text-blue-200">https://old-search.marginalia.nu/</a>.
The old version of Marginalia Search remains available
<a href="https://old-search.marginalia.nu/" class="underline text-liteblue dark:text-blue-200">here</a>.
</div>
</div>
<div class="mx-auto flex flex-col sm:flex-row my-4 sm:space-x-2 space-y-2 sm:space-y-0 w-full md:w-auto px-2">
<div class="flex flex-col border dark:border-gray-600 rounded overflow-hidden dark:bg-gray-800 bg-white p-6 space-y-3">
<div class="flex flex-col border border-gray-300 dark:border-gray-600 rounded overflow-hidden dark:bg-gray-800 bg-white p-6 space-y-3">
<div><i class="fas fa-sailboat mx-2 text-margeblue dark:text-slate-200"></i>Explore the Web</div>
<ul class="list-disc ml-6 text-slate-700 dark:text-white text-xs leading-5">
<li>Prioritizes non-commercial content</li>
@@ -52,7 +48,7 @@
</ul>
</div>
<div class="flex flex-col border dark:border-gray-600 rounded overflow-hidden dark:bg-gray-800 bg-white p-6 space-y-3 ">
<div class="flex flex-col border border-gray-300 dark:border-gray-600 rounded overflow-hidden dark:bg-gray-800 bg-white p-6 space-y-3 ">
<div><i class="fas fa-hand-holding-hand mx-2 text-margeblue dark:text-slate-200"></i>Open Source</div>
<ul class="list-disc ml-6 text-slate-700 dark:text-white text-xs leading-5">
<li>Custom index and crawler software</li>
@@ -65,7 +61,7 @@
</div>
</div>
<div class="flex flex-col border dark:border-gray-600 rounded overflow-hidden dark:bg-gray-800 bg-white p-6 space-y-3 ">
<div class="flex flex-col border border-gray-300 dark:border-gray-600 rounded overflow-hidden dark:bg-gray-800 bg-white p-6 space-y-3 ">
<div><i class="fas fa-lock mx-2 text-margeblue dark:text-slate-200"></i> Privacy by default</div>
<ul class="list-disc ml-6 text-slate-700 dark:text-white text-xs leading-5">
<li>Filter out tracking and adtech</li>

View File

@@ -13,7 +13,7 @@
@template.part.head(title = "Marginalia Search - " + parameters.query())
<body class="min-h-screen bg-slate-100 dark:bg-gray-900 dark:text-white font-sans " >
<body class="min-h-screen bg-bgblue dark:bg-gray-900 dark:text-white font-sans " >
@template.part.navbar(navbar = navbar)
<div>

View File

@@ -11,7 +11,7 @@
@template.part.head(title = "Marginalia Search - " + model.domainA() + "/" + model.domainB())
<body class="min-h-screen bg-slate-100 dark:bg-gray-900 dark:text-white font-sans " >
<body class="min-h-screen bg-bgblue dark:bg-gray-900 dark:text-white font-sans " >
@template.part.navbar(navbar = navbar)

View File

@@ -9,7 +9,7 @@
@template.part.head(title = "Marginalia Search - " + model.domain())
<body class="min-h-screen bg-slate-100 dark:bg-gray-900 dark:text-white font-sans " >
<body class="min-h-screen bg-bgblue dark:bg-gray-900 dark:text-white font-sans " >
@template.part.navbar(navbar = navbar)

View File

@@ -7,7 +7,7 @@
@if (!list.isEmpty())
<div class="bg-white dark:bg-gray-800 shadow-sm rounded overflow-hidden border dark:border-gray-600">
<div class="bg-white dark:bg-gray-800 shadow-sm rounded overflow-hidden border border-gray-300 dark:border-gray-600">
<div class="px-4 py-2 bg-margeblue text-white border-b border-gray-200 dark:border-gray-600 flex place-items-baseline">
<h2 class="text-md">${title}</h2>
<div class="grow"></div>

View File

@@ -9,11 +9,11 @@
@template.part.head(title = "Marginalia Search - Site Viewer")
<body class="min-h-screen bg-slate-100 dark:bg-gray-900 dark:text-white font-sans " >
<body class="min-h-screen bg-bgblue dark:bg-gray-900 dark:text-white font-sans " >
@template.part.navbar(navbar = navbar)
<header class="border-b border-gray-300 dark:border-gray-600 bg-white dark:bg-gray-800 shadow-md">
<header class="border-b border-gray-300 border-gray-300 dark:border-gray-600 bg-white dark:bg-gray-800 shadow-md">
<div class="max-w-[1400px] mx-auto px-4 py-4">
<h1 class="text-base md:text-xl mr-2 md:mr-8 font-serif">View Site Information</h1>
</div>
@@ -22,7 +22,7 @@
<div class="max-w-[1000px] mx-auto flex gap-4 flex-col md:flex-row place-items-center md:place-items-start p-4">
<div class="border dark:border-gray-600 rounded md:my-4 overflow-hidden bg-white dark:bg-gray-800 flex flex-col space-y-2 flex-1">
<div class="border border-gray-300 dark:border-gray-600 rounded md:my-4 overflow-hidden bg-white dark:bg-gray-800 flex flex-col space-y-2 flex-1">
<div class="bg-margeblue text-white p-2 text-sm mb-2">View Site Information</div>
<p class="mx-4">This utility lets you explore what the search engine knows about the web,
@@ -45,7 +45,7 @@
</div>
@if (!model.domains().isEmpty())
<div class="border dark:border-gray-600 rounded md:my-4 overflow-hidden w-full md:w-auto">
<div class="border border-gray-300 dark:border-gray-600 rounded md:my-4 overflow-hidden w-full md:w-auto">
<div class="bg-margeblue text-white p-2 text-sm">Recently Discovered Domains</div>

View File

@@ -8,17 +8,17 @@
<div class="flex flex-col space-y-4 my-4 w-full">
@if (backlinks.results().isEmpty())
<div class="border dark:border-gray-600 rounded bg-white dark:bg-gray-800 dark:text-white flex flex-col overflow-hidden p-4 mx-4 text-gray-800 text-sm ">
<div class="border border-gray-300 dark:border-gray-600 rounded bg-white dark:bg-gray-800 dark:text-white flex flex-col overflow-hidden p-4 mx-4 text-gray-800 text-sm ">
The search engine isn't aware of any backlinks to ${backlinks.domain()}!
</div>
@else
<div class="border dark:border-gray-600 rounded bg-white dark:bg-gray-800 dark:text-white flex flex-col overflow-hidden p-4 mx-4 text-gray-800 text-sm">
<div class="border border-gray-300 dark:border-gray-600 rounded bg-white dark:bg-gray-800 dark:text-white flex flex-col overflow-hidden p-4 mx-4 text-gray-800 text-sm">
Showing documents linking to ${backlinks.domain()}
</div>
@endif
@for (GroupedUrlDetails group : backlinks.results())
<div class="border dark:border-gray-600 rounded bg-white dark:bg-gray-800 dark:text-white flex flex-col overflow-hidden mx-4">
<div class="border dark:border-gray-600 border-gray-300 rounded bg-white dark:bg-gray-800 dark:text-white flex flex-col overflow-hidden mx-4">
<div class="flex space-x-2 flex-row place-items-baseline bg-margeblue text-white p-2 text-md">
<span class="fas fa-globe"></span>
<a href="/site/${group.domain().toString()}">${group.domain().toString()}</a>

View File

@@ -9,17 +9,17 @@
<div class="flex flex-col space-y-4 my-4">
@if (docs.results().isEmpty())
<div class="border dark:border-gray-600 rounded bg-white dark:bg-gray-800 dark:text-white flex flex-col overflow-hidden p-4 mx-4 text-gray-800 text-sm">
<div class="border border-gray-300 dark:border-gray-600 rounded bg-white dark:bg-gray-800 dark:text-white flex flex-col overflow-hidden p-4 mx-4 text-gray-800 text-sm">
The search engine doesn't index any documents from ${docs.domain()}
</div>
@else
<div class="border dark:border-gray-600 rounded bg-white dark:bg-gray-800 dark:text-white flex flex-col overflow-hidden p-4 mx-4 text-gray-800 text-sm">
<div class="border border-gray-300 dark:border-gray-600 rounded bg-white dark:bg-gray-800 dark:text-white flex flex-col overflow-hidden p-4 mx-4 text-gray-800 text-sm">
Showing documents from ${docs.domain()}
</div>
@endif
@for (UrlDetails details : docs.results())
<div class="border dark:border-gray-600 rounded bg-white dark:bg-gray-800 dark:text-white flex flex-col overflow-hidden mx-4">
<div class="border border-gray-300 dark:border-gray-600 rounded bg-white dark:bg-gray-800 dark:text-white flex flex-col overflow-hidden mx-4">
<div class="flex grow justify-between items-start p-4">
<div class="flex-1">
<h2 class="text-xl text-gray-800 dark:text-white font-serif mr-4">

View File

@@ -8,9 +8,9 @@
<!-- Main content -->
<div class="flex-1 p-4 space-y-4 mx-auto w-full md:w-auto">
<div class="flex border dark:border-gray-600 rounded bg-white dark:bg-gray-800 flex-col space-y-4 pb-4 overflow-hidden md:max-w-lg" >
<div class="flex place-items-baseline space-x-2 p-2 text-md border-b dark:border-gray-600 bg-margeblue text-white">
<i class="fa fa-globe"></i>
<div class="flex border border-gray-300 dark:border-gray-600 rounded bg-white dark:bg-gray-800 flex-col space-y-4 pb-4 overflow-hidden md:max-w-lg" >
<div class="flex place-items-center space-x-2 p-2 text-md border-b dark:border-gray-600 bg-margeblue text-white">
<img src="/site/${siteInfo.domain()}/favicon" style="width: 16px; height: 16px; vertical-align: center">
<span>${siteInfo.domain()}</span>
<div class="grow">
</div>

View File

@@ -4,7 +4,7 @@
@param ReportDomain reportDomain
<div class="flex-col mx-auto">
<div class="max-w-2xl mx-auto bg-white dark:bg-gray-800 border dark:border-gray-600 rounded overflow-auto shadow-sm my-4 space-y-4 w-full">
<div class="max-w-2xl mx-auto bg-white dark:bg-gray-800 border border-gray-300 dark:border-gray-600 rounded overflow-auto shadow-sm my-4 space-y-4 w-full">
<div class="px-4 py-2 bg-margeblue text-white border-b border-gray-200 dark:border-gray-800">
<h2 class="text-md">Report Domain Issue</h2>
</div>

View File

@@ -9,6 +9,15 @@ module.exports = {
nicotine: '#f8f8ee',
margeblue: '#3e5f6f',
liteblue: '#0066cc',
bgblue: '#e5e9eb',
},
screens: {
'coarsepointer': {
'raw': '(pointer: coarse)'
},
'finepointer': {
'raw': '(pointer: fine)'
},
}
},
screens: {

View File

@@ -23,7 +23,12 @@ apply from: "$rootProject.projectDir/srcsets.gradle"
apply from: "$rootProject.projectDir/docker.gradle"
dependencies {
implementation project(':third-party:symspell')
implementation project(':code:common:db')
implementation project(':code:common:model')
implementation project(':code:common:service')
implementation project(':code:common:config')
implementation project(':code:functions:live-capture')
implementation project(':code:functions:live-capture:api')
@@ -32,20 +37,16 @@ dependencies {
implementation project(':code:functions:domain-info')
implementation project(':code:functions:domain-info:api')
implementation project(':code:common:config')
implementation project(':code:common:service')
implementation project(':code:common:model')
implementation project(':code:common:db')
implementation project(':code:features-search:screenshots')
implementation project(':code:libraries:geo-ip')
implementation project(':code:libraries:language-processing')
implementation project(':code:libraries:term-frequency-dict')
implementation libs.bundles.slf4j
implementation project(':third-party:symspell')
implementation libs.bundles.slf4j
implementation libs.prometheus
implementation libs.commons.io
implementation libs.guava
libs.bundles.grpc.get().each {
implementation dependencies.create(it) {
@@ -59,9 +60,7 @@ dependencies {
implementation dependencies.create(libs.guice.get()) {
exclude group: 'com.google.guava'
}
implementation dependencies.create(libs.spark.get()) {
exclude group: 'org.eclipse.jetty'
}
implementation libs.bundles.jooby
implementation libs.bundles.jetty
implementation libs.opencsv
implementation libs.trove

Some files were not shown because too many files have changed in this diff.