mirror of https://github.com/MarginaliaSearch/MarginaliaSearch.git synced 2025-10-06 17:32:39 +02:00

Compare commits


86 Commits

Author SHA1 Message Date
Viktor Lofgren
a2b076f9be (converter) Add progress tracking for big domains in converter 2025-01-26 18:03:59 +01:00
Viktor Lofgren
c8b0a32c0f (crawler) Reduce long retention of CrawlDataReference objects and their associated SerializableCrawlDataStreams 2025-01-26 15:40:17 +01:00
Viktor Lofgren
f0d74aa3bb (converter) Fix close() ordering to prevent converter crash 2025-01-26 14:47:36 +01:00
Viktor Lofgren
74a1f100f4 (converter) Refactor to remove CrawledDomainReader and move its functionality into SerializableCrawlDataStream 2025-01-26 14:46:50 +01:00
Viktor Lofgren
eb049658e4 (converter) Add truncation at the parser step to prevent the converter from spending too much time on excessively large documents
Refactor to do this without introducing additional copies
2025-01-26 14:28:53 +01:00
Viktor Lofgren
db138b2a6f (converter) Add truncation at the parser step to prevent the converter from spending too much time on excessively large documents 2025-01-26 14:25:57 +01:00
Viktor Lofgren
1673fc284c (converter) Reduce lock contention in converter by separating the processing of full and simple-track domains 2025-01-26 13:21:46 +01:00
Viktor Lofgren
503ea57d5b (converter) Reduce lock contention in converter by separating the processing of full and simple-track domains 2025-01-26 13:18:14 +01:00
Viktor Lofgren
18ca926c7f (converter) Truncate excessively long strings in SentenceExtractor, malformed data was effectively DoS-ing the converter 2025-01-26 12:52:54 +01:00
Viktor Lofgren
db99242db2 (converter) Adding some logging around the simple processing track to investigate an issue with the converter stalling 2025-01-26 12:02:00 +01:00
Viktor Lofgren
2b9d2985ba (doc) Update readme with up-to-date install instructions. 2025-01-24 18:51:41 +01:00
Viktor Lofgren
eeb6ecd711 (search) Make it clearer that the affiliate marker applies to the result, and not the search engine's relation to the result. 2025-01-24 18:50:00 +01:00
Viktor Lofgren
1f58aeadbf (build) Upgrade JIB 2025-01-24 18:49:28 +01:00
Viktor Lofgren
3d68be64da (crawler) Add default CT when it's missing for icons 2025-01-22 13:55:47 +01:00
Viktor Lofgren
668f3b16ef (search) Redirect ^/site/$ to /site 2025-01-22 13:35:18 +01:00
Viktor Lofgren
98a340a0d1 (crawler) Add favicon data to domain state db in its own table 2025-01-22 11:41:20 +01:00
Viktor Lofgren
8862100f7e (crawler) Improve logging and error handling 2025-01-21 21:44:21 +01:00
Viktor Lofgren
274941f6de (crawler) Smarter parquet->slop crawl data migration 2025-01-21 21:26:12 +01:00
Viktor Lofgren
abec83582d Fix refactoring gore 2025-01-21 15:08:04 +01:00
Viktor Lofgren
569520c9b6 (index) Add manual adjustments for rankings based on domain 2025-01-21 15:07:43 +01:00
Viktor Lofgren
088310e998 (converter) Improve simple processing performance
The recent slop migration changes introduced a performance regression in the simple conversion track.  This resolves the regression.
2025-01-21 14:13:33 +01:00
Viktor
270cab874b Merge pull request #134 from MarginaliaSearch/slop-crawl-data-spike
Store crawl data in slop instead of parquet
2025-01-21 13:34:22 +01:00
Viktor Lofgren
4c74e280d3 (crawler) Fix urlencoding in sitemap fetcher 2025-01-21 13:33:35 +01:00
Viktor Lofgren
5b347e17ac (crawler) Automatically migrate to slop from parquet when crawling 2025-01-21 13:33:14 +01:00
Viktor Lofgren
55d6ab933f Merge branch 'master' into slop-crawl-data-spike 2025-01-21 13:32:58 +01:00
Viktor Lofgren
43b74e9706 (crawler) Fix exception handler and resource leak in WarcRecorder 2025-01-20 23:45:28 +01:00
Viktor Lofgren
579a115243 (crawler) Reduce log spam from error handling in new sitemap fetcher 2025-01-20 23:17:13 +01:00
Viktor
2c67f50a43 Merge pull request #150 from MarginaliaSearch/httpclient-in-crawler
Reduce the use of 3rd party code in the crawler
2025-01-20 19:35:30 +01:00
Viktor Lofgren
78a958e2b0 (crawler) Fix broken test that started failing after the search engine moved to a new domain 2025-01-20 18:52:14 +01:00
Viktor Lofgren
4e939389b2 (crawler) New Jsoup based sitemap parser 2025-01-20 14:37:44 +01:00
Viktor Lofgren
e67a9bdb91 (crawler) Migrate away from using OkHttp in the crawler, use Java's HttpClient instead. 2025-01-19 15:07:11 +01:00
Viktor Lofgren
567e4e1237 (crawler) Fast detection and bail-out for crawler traps
Improve logging and exclude robots.txt from this logic.
2025-01-18 15:28:54 +01:00
Viktor Lofgren
4342e42722 (crawler) Fast detection and bail-out for crawler traps
Nepenthes has been doing the rounds on social media; this adds an easy detection and mitigation mechanism for this type of trap, since sadly not all webmasters set up their robots.txt correctly.  Out-of-the-box crawl limits will also deal with this type of attack, but this fix is faster.
2025-01-17 13:02:57 +01:00
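The detection logic itself isn't shown in these diffs, but a bail-out heuristic of roughly this shape is one cheap way such link mazes can be caught (a hypothetical Java sketch with a made-up threshold, not the project's actual code):

import java.util.HashMap;
import java.util.Map;

class TrapHeuristic {
    private static final int MAX_URLS_PER_PREFIX = 2_000; // hypothetical threshold
    private final Map<String, Integer> urlsSeenByPrefix = new HashMap<>();

    /** Returns true when a single path prefix keeps producing new URLs and the crawl should be abandoned. */
    boolean shouldBailOut(String path) {
        int secondSlash = path.indexOf('/', 1);
        String prefix = (path.length() > 1 && secondSlash > 0) ? path.substring(0, secondSlash) : path;
        int seen = urlsSeenByPrefix.merge(prefix, 1, Integer::sum);
        return seen > MAX_URLS_PER_PREFIX;
    }
}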
Viktor Lofgren
bc818056e6 (run) Fix templates for mariadb
Apparently the docker image contract changed at some point: we should now spawn mariadbd rather than mysqld, and use mariadb-admin rather than mysqladmin.
2025-01-16 15:27:02 +01:00
Viktor Lofgren
de2feac238 (chore) Upgrade jib from 3.4.3 to 3.4.4 2025-01-16 15:10:45 +01:00
Viktor Lofgren
1e770205a5 (search) Dyslexia fix 2025-01-12 20:40:14 +01:00
Viktor
e44ecd6d69 Merge pull request #149 from MarginaliaSearch/vlofgren-patch-1
Update ROADMAP.md
2025-01-12 20:38:36 +01:00
Viktor
5b93a0e633 Update ROADMAP.md 2025-01-12 20:38:11 +01:00
Viktor
08fb0e5efe Update ROADMAP.md 2025-01-12 20:37:43 +01:00
Viktor
bcf67782ea Update ROADMAP.md 2025-01-12 20:37:09 +01:00
Viktor Lofgren
ef3f175ede (search) Don't clobber the search query URL with default values 2025-01-10 15:57:30 +01:00
Viktor Lofgren
bbe4b5d9fd Revert experimental changes 2025-01-10 15:52:02 +01:00
Viktor Lofgren
c67a635103 (search, experimental) Add a few debugging tracks to the search UI 2025-01-10 15:44:44 +01:00
Viktor Lofgren
20b24133fb (search, experimental) Add a few debugging tracks to the search UI 2025-01-10 15:34:48 +01:00
Viktor Lofgren
f2567677e8 (index-client) Clean up index client code
Improve error handling.  This should be a relatively rare case, but we don't want one bad index partition to blow up the entire query.
2025-01-10 15:17:07 +01:00
Viktor Lofgren
bc2c2061f2 (index-client) Clean up index client code
This has the RPC stream reception performed in parallel in separate threads, rather than blocking sequentially in the main thread, hopefully giving a slight performance boost.
2025-01-10 15:14:42 +01:00
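A rough sketch of the pattern these two index-client commits describe: fan the per-partition RPC calls out to worker threads, and tolerate an individual partition failing so it degrades the result set instead of failing the whole query. All names below are illustrative, not the actual index client code.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

class ParallelPartitionQuery {
    private final ExecutorService pool = Executors.newCachedThreadPool();

    List<ResultItem> query(List<PartitionClient> partitions, Query q) throws InterruptedException {
        List<Future<List<ResultItem>>> futures = new ArrayList<>();
        for (PartitionClient partition : partitions) {
            // Each partition's stream is received in its own thread
            futures.add(pool.submit(() -> partition.streamResults(q)));
        }
        List<ResultItem> results = new ArrayList<>();
        for (Future<List<ResultItem>> f : futures) {
            try {
                results.addAll(f.get(30, TimeUnit.SECONDS));
            } catch (ExecutionException | TimeoutException e) {
                // One bad partition should degrade the results, not blow up the entire query
            }
        }
        return results;
    }

    interface PartitionClient { List<ResultItem> streamResults(Query q); }
    record Query(String humanQuery) {}
    record ResultItem(long documentId, double score) {}
}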
Viktor Lofgren
1c7f5a31a5 (search) Further reduce the number of db queries by adding more caching to DbDomainQueries. 2025-01-10 14:17:29 +01:00
Viktor Lofgren
59a8ea60f7 (search) Further reduce the number of db queries by adding more caching to DbDomainQueries. 2025-01-10 14:15:22 +01:00
Viktor Lofgren
aa9b1244ea (search) Reduce the number of db queries a bit by caching data that doesn't change too often 2025-01-10 13:56:04 +01:00
Viktor Lofgren
2d17233366 (search) Reduce the number of db queries a bit by caching data that doesn't change too often 2025-01-10 13:53:56 +01:00
Viktor Lofgren
b245cc9f38 (search) Reduce the number of db queries a bit by caching data that doesn't change too often 2025-01-10 13:46:19 +01:00
Viktor Lofgren
6614d05bdf (db) Make db pool size configurable 2025-01-09 20:20:51 +01:00
Viktor Lofgren
55aeb03c4a (feeds) Replace rssreader based parsing with a custom jsoup based rss parser
This solves some issues with the rssreader based parser, which was very picky about the XML being valid.  Jsoup is much more lenient when parsing malformed XML.
2025-01-09 18:29:55 +01:00
Viktor Lofgren
faa589962f (live-capture) Browserless now requires a token 2025-01-09 14:51:11 +01:00
Viktor Lofgren
c7edd6b39f (live-capture) Browserless now requires a token 2025-01-09 14:46:05 +01:00
Viktor Lofgren
79da622e3b (search) Update front page with new banner about move 2025-01-08 21:38:19 +01:00
Viktor Lofgren
3da8337ba6 (feeds) Add system property for exporting fetched feeds to a slop table for debugging 2025-01-08 20:49:16 +01:00
Viktor Lofgren
a32d230f0a (special) Trigger deployment 2025-01-08 20:07:54 +01:00
Viktor Lofgren
3772bfd387 (query) Fix handling of optional ranking parameters 2025-01-08 17:11:22 +01:00
Viktor Lofgren
02a7900d1a (search) Correct search-in-title toggle in search UI 2025-01-08 16:51:10 +01:00
Viktor Lofgren
a1fb92468f (refac) Remove ResultRankingParameters, QueryLimits class and use protobuf classes directly instead
This is primarily to make the code a bit easier to reason about, and will reduce the level of indirection and data copying in the search-service->query-service->index-service communication chain.
2025-01-08 16:15:57 +01:00
Viktor Lofgren
b7f0a2a98e (search-service) Fix metrics for errors and request times
This was previously in place, but broke during the jooby migration.
2025-01-08 14:10:43 +01:00
Viktor Lofgren
5fb76b2e79 (search-service) Fix metrics for errors and request times
This was previously in place, but broke during the jooby migration.
2025-01-08 14:06:03 +01:00
Viktor Lofgren
ad8c97f342 (search-service) Begin replacement of the crawl queue mechanism with node_affinity flagging
Previously a special db table was used to hold domains slated for crawling, but this is deprecated.  Instead, each domain now has a node_affinity flag that decides its indexing state: a value of -1 indicates it shouldn't be crawled, a value of 0 means it's slated for crawling by the next index partition to be crawled, and a positive value means it's assigned to that index partition.

The change set also adds a test case validating the modified behavior.
2025-01-08 13:25:56 +01:00
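A minimal sketch of the node_affinity semantics spelled out in this commit message (a hypothetical helper for illustration, not the project's actual code):

class NodeAffinity {
    static final int DO_NOT_CRAWL = -1;  // never crawl this domain
    static final int UNASSIGNED = 0;     // slated for the next index partition that crawls

    static boolean shouldCrawl(int nodeAffinity) { return nodeAffinity >= 0; }

    static boolean isAssignedToPartition(int nodeAffinity) { return nodeAffinity > 0; }

    static int assignedPartition(int nodeAffinity) {
        if (nodeAffinity <= 0)
            throw new IllegalStateException("Domain is not assigned to an index partition");
        return nodeAffinity;
    }
}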
Viktor Lofgren
dc1b6373eb (search-service) Clean up readme 2025-01-08 13:04:39 +01:00
Viktor Lofgren
983d6d067c (search-service) Add indexing indicator to sibling domains listing 2025-01-08 12:58:34 +01:00
Viktor Lofgren
a84a06975c (ranking-params) Add disable penalties flag to ranking params
This will help debugging ranking issues.  Later it may be added to some filters.
2025-01-08 00:16:49 +01:00
Viktor Lofgren
d2864c13ec (query-params) Add additional permitted query params 2025-01-07 20:21:44 +01:00
Viktor Lofgren
03ba53ce51 (legacy-search) Update nav bar with correct links 2025-01-07 17:44:52 +01:00
Viktor Lofgren
d4a6684931 (specialization) Soften length requirements for wiki-specialized documents (incl. cppreference) 2025-01-07 15:53:25 +01:00
Viktor
6f0485287a Merge pull request #145 from MarginaliaSearch/cppreference_fixes
Cppreference fixes
2025-01-07 15:43:19 +01:00
Viktor Lofgren
59e2dd4c26 (specialization) Soften length requirements for wiki-specialized documents (incl. cppreference) 2025-01-07 15:41:30 +01:00
Viktor Lofgren
ca1807caae (specialization) Add new specialization for cppreference.com
Give this reference website some synthetically generated tokens to improve the likelihood of a good match.
2025-01-07 15:41:05 +01:00
Viktor Lofgren
26c20e18ac (keyword-extraction) Soften constraints on keyword patterns, allowing for longer segmented words 2025-01-07 15:20:50 +01:00
Viktor Lofgren
7c90b6b414 (query) Don't blindly make tokens containing a colon into a non-ranking advice term 2025-01-07 15:18:05 +01:00
Viktor Lofgren
b63c54c4ce (search) Update opensearch.xml to point to non-redirecting domains. 2025-01-07 00:23:09 +01:00
Viktor Lofgren
fecd2f4ec3 (deploy) Add legacy search service to deploy script 2025-01-07 00:21:13 +01:00
Viktor Lofgren
39e420de88 (search) Add wayback machine link to siteinfo 2025-01-06 20:33:10 +01:00
Viktor Lofgren
dc83619861 (rssreader) Further suppress logging 2025-01-06 20:20:37 +01:00
Viktor Lofgren
87d1c89701 (search) Add listing of sibling subdomains to site overview 2025-01-06 20:17:36 +01:00
Viktor Lofgren
a42a7769e2 (legacy-search) Remove legacy paperdoll class 2025-01-06 20:17:36 +01:00
Viktor
202bda884f Update readme.md
Add note about installing tailwindcss via npm
2025-01-06 18:35:13 +01:00
Viktor Lofgren
47e58a21c6 Refactor documentBody method and ContentType charset handling
Updated the `documentBody` method to improve parsing retries and error handling. Refactored `ContentType` charset processing with cleaner logic, removing redundant handling for unsupported charsets. Also, updated the version of the `slop` library in dependency settings.
2024-12-17 17:11:37 +01:00
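A small sketch of the charset-handling shape this message describes: pull the charset out of a Content-Type header and fall back to UTF-8 for anything missing, malformed, or unsupported, rather than special-casing each failure mode. Illustrative only; not the actual ContentType code.

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

class CharsetResolution {
    static Charset charsetFromContentType(String contentType) {
        if (contentType == null) return StandardCharsets.UTF_8;
        for (String part : contentType.split(";")) {
            part = part.trim();
            if (part.regionMatches(true, 0, "charset=", 0, "charset=".length())) {
                String name = part.substring("charset=".length()).replace("\"", "").trim();
                try {
                    return Charset.forName(name);
                } catch (IllegalArgumentException e) {
                    // unknown or illegal charset name: fall back rather than failing the document
                    return StandardCharsets.UTF_8;
                }
            }
        }
        return StandardCharsets.UTF_8;
    }
}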
Viktor Lofgren
3714104976 Add loader for slop data in converter.
Also alter CrawledDocument to not require String parsing of the underlying byte[] data.  This should reduce the number of large memory allocations quite significantly, hopefully reducing the GC churn a bit.
2024-12-17 15:40:24 +01:00
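The idea described here, sketched out: keep the crawled body as byte[] and only decode it to a String on demand, so most documents never pay for a large up-front allocation. Field and method names are illustrative, not the actual CrawledDocument API.

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

class LazyBody {
    private final byte[] bodyBytes;
    private final Charset charset;

    LazyBody(byte[] bodyBytes, Charset charset) {
        this.bodyBytes = bodyBytes;
        this.charset = charset != null ? charset : StandardCharsets.UTF_8;
    }

    byte[] bodyBytes() { return bodyBytes; }                          // cheap: no copy, no decode
    String documentBody() { return new String(bodyBytes, charset); }  // decode only when actually needed
}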
Viktor Lofgren
f6f036b9b1 Switch to new Slop format for crawl data storage and processing.
Replaces Parquet output and processing with the new Slop-based format. Includes data migration functionality, updates to handling and writing of crawl data, and introduces support for SLOP in domain readers and converters.
2024-12-15 19:34:03 +01:00
Viktor Lofgren
b510b7feb8 Spike for storing crawl data in slop instead of parquet
This seems to reduce RAM overhead to hundreds of MB (from ~2 GB), as well as roughly doubling read speeds.  On-disk size is virtually identical.
2024-12-15 15:49:47 +01:00
171 changed files with 3431 additions and 2546 deletions

View File

@@ -1,4 +1,4 @@
# Roadmap 2024-2025
# Roadmap 2025
This is a roadmap with major features planned for Marginalia Search.
@@ -30,12 +30,6 @@ Retaining the ability to independently crawl the web is still strongly desirable
The search engine has a bit of a problem showing spicy content mixed in with the results. It would be desirable to have a way to filter this out. It's likely something like a URL blacklist (e.g. [UT1](https://dsi.ut-capitole.fr/blacklists/index_en.php) )
combined with naive bayesian filter would go a long way, or something more sophisticated...?
## Web Design Overhaul
The design is kinda clunky and hard to maintain, and needlessly outdated-looking.
In progress: PR [#127](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/127) -- demo available at https://test.marginalia.nu/
## Additional Language Support
It would be desirable if the search engine supported more languages than English. This is partially about
@@ -62,8 +56,31 @@ filter for any API consumer.
I've talked to the stract dev and he does not think it's a good idea to mimic their optics language, which is quite ad-hoc, but instead to work together to find some new common description language for this.
## Show favicons next to search results
This is expected from search engines. Basic proof of concept sketch of fetching this data has been done, but the feature is some way from being reality.
## Specialized crawler for github
One of the search engine's biggest limitations right now is that it does not index github at all. A specialized crawler that fetches at least the readme.md would go a long way toward providing search capabilities in this domain.
# Completed
## Web Design Overhaul (COMPLETED 2025-01)
The design is kinda clunky and hard to maintain, and needlessly outdated-looking.
PR [#127](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/127)
## Finalize RSS support (COMPLETED 2024-11)
Marginalia has experimental RSS preview support for a few domains. This works well and
it should be extended to all domains. It would also be interesting to offer search of the
RSS data itself, or use the RSS set to feed a special live index that updates faster than the
main dataset.
Completed with PR [#122](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/122) and PR [#125](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/125)
## Proper Position Index (COMPLETED 2024-09)
The search engine uses a fixed width bit mask to indicate word positions. It has the benefit
@@ -76,11 +93,3 @@ list, as is the civilized way of doing this.
Completed with PR [#99](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/99)
## Finalize RSS support (COMPLETED 2024-11)
Marginalia has experimental RSS preview support for a few domains. This works well and
it should be extended to all domains. It would also be interesting to offer search of the
RSS data itself, or use the RSS set to feed a special live index that updates faster than the
main dataset.
Completed with PR [#122](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/122) and PR [#125](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/125)

View File

@@ -5,7 +5,7 @@ plugins {
// This is a workaround for a bug in the Jib plugin that causes it to stall randomly
// https://github.com/GoogleContainerTools/jib/issues/3347
id 'com.google.cloud.tools.jib' version '3.4.3' apply(false)
id 'com.google.cloud.tools.jib' version '3.4.4' apply(false)
}
group 'marginalia'
@@ -47,7 +47,7 @@ ext {
dockerImageBase='container-registry.oracle.com/graalvm/jdk:23'
dockerImageTag='latest'
dockerImageRegistry='marginalia'
jibVersion = '3.4.3'
jibVersion = '3.4.4'
}

View File

@@ -8,18 +8,22 @@ import com.google.inject.Inject;
import com.google.inject.Singleton;
import com.zaxxer.hikari.HikariDataSource;
import nu.marginalia.model.EdgeDomain;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.sql.SQLException;
import java.util.NoSuchElementException;
import java.util.Optional;
import java.util.OptionalInt;
import java.util.*;
import java.util.concurrent.ExecutionException;
@Singleton
public class DbDomainQueries {
private final HikariDataSource dataSource;
private static final Logger logger = LoggerFactory.getLogger(DbDomainQueries.class);
private final Cache<EdgeDomain, Integer> domainIdCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
private final Cache<Integer, EdgeDomain> domainNameCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
private final Cache<String, List<DomainWithNode>> siblingsCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
@Inject
public DbDomainQueries(HikariDataSource dataSource)
@@ -29,16 +33,21 @@ public class DbDomainQueries {
public Integer getDomainId(EdgeDomain domain) throws NoSuchElementException {
try (var connection = dataSource.getConnection()) {
try {
return domainIdCache.get(domain, () -> {
try (var stmt = connection.prepareStatement("SELECT ID FROM EC_DOMAIN WHERE DOMAIN_NAME=?")) {
try (var connection = dataSource.getConnection();
var stmt = connection.prepareStatement("SELECT ID FROM EC_DOMAIN WHERE DOMAIN_NAME=?")) {
stmt.setString(1, domain.toString());
var rsp = stmt.executeQuery();
if (rsp.next()) {
return rsp.getInt(1);
}
}
catch (SQLException ex) {
throw new RuntimeException(ex);
}
throw new NoSuchElementException();
});
}
@@ -48,9 +57,6 @@ public class DbDomainQueries {
catch (ExecutionException ex) {
throw new RuntimeException(ex.getCause());
}
catch (SQLException ex) {
throw new RuntimeException(ex);
}
}
public OptionalInt tryGetDomainId(EdgeDomain domain) {
@@ -83,22 +89,60 @@ public class DbDomainQueries {
}
public Optional<EdgeDomain> getDomain(int id) {
try (var connection = dataSource.getConnection()) {
EdgeDomain existing = domainNameCache.getIfPresent(id);
if (existing != null) {
return Optional.of(existing);
}
try (var connection = dataSource.getConnection()) {
try (var stmt = connection.prepareStatement("SELECT DOMAIN_NAME FROM EC_DOMAIN WHERE ID=?")) {
stmt.setInt(1, id);
var rsp = stmt.executeQuery();
if (rsp.next()) {
return Optional.of(new EdgeDomain(rsp.getString(1)));
var val = new EdgeDomain(rsp.getString(1));
domainNameCache.put(id, val);
return Optional.of(val);
}
return Optional.empty();
}
}
catch (UncheckedExecutionException ex) {
throw new RuntimeException(ex.getCause());
}
catch (SQLException ex) {
throw new RuntimeException(ex);
}
}
public List<DomainWithNode> otherSubdomains(EdgeDomain domain, int cnt) throws ExecutionException {
String topDomain = domain.topDomain;
return siblingsCache.get(topDomain, () -> {
List<DomainWithNode> ret = new ArrayList<>();
try (var conn = dataSource.getConnection();
var stmt = conn.prepareStatement("SELECT DOMAIN_NAME, NODE_AFFINITY FROM EC_DOMAIN WHERE DOMAIN_TOP = ? LIMIT ?")) {
stmt.setString(1, topDomain);
stmt.setInt(2, cnt);
var rs = stmt.executeQuery();
while (rs.next()) {
var sibling = new EdgeDomain(rs.getString(1));
if (sibling.equals(domain))
continue;
ret.add(new DomainWithNode(sibling, rs.getInt(2)));
}
} catch (SQLException e) {
logger.error("Failed to get domain neighbors");
}
return ret;
});
}
public record DomainWithNode (EdgeDomain domain, int nodeAffinity) {
public boolean isIndexed() {
return nodeAffinity > 0;
}
}
}

View File

@@ -1,118 +0,0 @@
package nu.marginalia.db;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.OptionalInt;
/** Class used in exporting data. This is intended to be used for a brief time
* and then discarded, not kept around as a service.
*/
public class DbDomainStatsExportMultitool implements AutoCloseable {
private final Connection connection;
private final int nodeId;
private final PreparedStatement knownUrlsQuery;
private final PreparedStatement visitedUrlsQuery;
private final PreparedStatement goodUrlsQuery;
private final PreparedStatement domainNameToId;
private final PreparedStatement allDomainsQuery;
private final PreparedStatement crawlQueueDomains;
private final PreparedStatement indexedDomainsQuery;
public DbDomainStatsExportMultitool(HikariDataSource dataSource, int nodeId) throws SQLException {
this.connection = dataSource.getConnection();
this.nodeId = nodeId;
knownUrlsQuery = connection.prepareStatement("""
SELECT KNOWN_URLS
FROM EC_DOMAIN INNER JOIN DOMAIN_METADATA
ON EC_DOMAIN.ID=DOMAIN_METADATA.ID
WHERE DOMAIN_NAME=?
""");
visitedUrlsQuery = connection.prepareStatement("""
SELECT VISITED_URLS
FROM EC_DOMAIN INNER JOIN DOMAIN_METADATA
ON EC_DOMAIN.ID=DOMAIN_METADATA.ID
WHERE DOMAIN_NAME=?
""");
goodUrlsQuery = connection.prepareStatement("""
SELECT GOOD_URLS
FROM EC_DOMAIN INNER JOIN DOMAIN_METADATA
ON EC_DOMAIN.ID=DOMAIN_METADATA.ID
WHERE DOMAIN_NAME=?
""");
domainNameToId = connection.prepareStatement("""
SELECT ID
FROM EC_DOMAIN
WHERE DOMAIN_NAME=?
""");
allDomainsQuery = connection.prepareStatement("""
SELECT DOMAIN_NAME
FROM EC_DOMAIN
""");
crawlQueueDomains = connection.prepareStatement("""
SELECT DOMAIN_NAME
FROM CRAWL_QUEUE
""");
indexedDomainsQuery = connection.prepareStatement("""
SELECT DOMAIN_NAME
FROM EC_DOMAIN
WHERE INDEXED > 0
""");
}
public OptionalInt getVisitedUrls(String domainName) throws SQLException {
return executeNameToIntQuery(domainName, visitedUrlsQuery);
}
public OptionalInt getDomainId(String domainName) throws SQLException {
return executeNameToIntQuery(domainName, domainNameToId);
}
public List<String> getCrawlQueueDomains() throws SQLException {
return executeListQuery(crawlQueueDomains, 100);
}
public List<String> getAllIndexedDomains() throws SQLException {
return executeListQuery(indexedDomainsQuery, 100_000);
}
private OptionalInt executeNameToIntQuery(String domainName, PreparedStatement statement)
throws SQLException {
statement.setString(1, domainName);
var rs = statement.executeQuery();
if (rs.next()) {
return OptionalInt.of(rs.getInt(1));
}
return OptionalInt.empty();
}
private List<String> executeListQuery(PreparedStatement statement, int sizeHint) throws SQLException {
List<String> ret = new ArrayList<>(sizeHint);
var rs = statement.executeQuery();
while (rs.next()) {
ret.add(rs.getString(1));
}
return ret;
}
@Override
public void close() throws SQLException {
knownUrlsQuery.close();
goodUrlsQuery.close();
visitedUrlsQuery.close();
allDomainsQuery.close();
crawlQueueDomains.close();
domainNameToId.close();
connection.close();
}
}

View File

@@ -83,6 +83,11 @@ public class QueryParams {
if (path.endsWith("StoryView.py")) { // folklore.org is neat
return param.startsWith("project=") || param.startsWith("story=");
}
// www.perseus.tufts.edu:
if (param.startsWith("collection=")) return true;
if (param.startsWith("doc=")) return true;
return false;
}
}

View File

@@ -89,7 +89,7 @@ public class DatabaseModule extends AbstractModule {
config.addDataSourceProperty("prepStmtCacheSize", "250");
config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
config.setMaximumPoolSize(5);
config.setMaximumPoolSize(Integer.getInteger("db.poolSize", 5));
config.setMinimumIdle(2);
config.setMaxLifetime(Duration.ofMinutes(9).toMillis());

View File

@@ -20,6 +20,7 @@ public enum ExecutorActor {
EXPORT_FEEDS(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
EXPORT_SAMPLE_DATA(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
DOWNLOAD_SAMPLE(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
MIGRATE_CRAWL_DATA(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
PROC_CONVERTER_SPAWNER(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED, NodeProfile.SIDELOAD),
PROC_LOADER_SPAWNER(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED, NodeProfile.SIDELOAD),

View File

@@ -66,6 +66,7 @@ public class ExecutorActorControlService {
DownloadSampleActor downloadSampleActor,
ScrapeFeedsActor scrapeFeedsActor,
ExecutorActorStateMachines stateMachines,
MigrateCrawlDataActor migrateCrawlDataActor,
ExportAllPrecessionActor exportAllPrecessionActor,
UpdateRssActor updateRssActor) throws SQLException {
this.messageQueueFactory = messageQueueFactory;
@@ -107,6 +108,8 @@ public class ExecutorActorControlService {
register(ExecutorActor.SCRAPE_FEEDS, scrapeFeedsActor);
register(ExecutorActor.UPDATE_RSS, updateRssActor);
register(ExecutorActor.MIGRATE_CRAWL_DATA, migrateCrawlDataActor);
if (serviceConfiguration.node() == 1) {
register(ExecutorActor.PREC_EXPORT_ALL, exportAllPrecessionActor);
}

View File

@@ -0,0 +1,130 @@
package nu.marginalia.actor.task;
import com.google.gson.Gson;
import jakarta.inject.Inject;
import jakarta.inject.Singleton;
import nu.marginalia.actor.prototype.RecordActorPrototype;
import nu.marginalia.actor.state.ActorStep;
import nu.marginalia.io.CrawlerOutputFile;
import nu.marginalia.process.log.WorkLog;
import nu.marginalia.process.log.WorkLogEntry;
import nu.marginalia.slop.SlopCrawlDataRecord;
import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorage;
import nu.marginalia.storage.model.FileStorageId;
import org.apache.logging.log4j.util.Strings;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;
@Singleton
public class MigrateCrawlDataActor extends RecordActorPrototype {
private final FileStorageService fileStorageService;
private static final Logger logger = LoggerFactory.getLogger(MigrateCrawlDataActor.class);
@Inject
public MigrateCrawlDataActor(Gson gson, FileStorageService fileStorageService) {
super(gson);
this.fileStorageService = fileStorageService;
}
public record Run(long fileStorageId) implements ActorStep {}
@Override
public ActorStep transition(ActorStep self) throws Exception {
return switch (self) {
case Run(long fileStorageId) -> {
FileStorage storage = fileStorageService.getStorage(FileStorageId.of(fileStorageId));
Path root = storage.asPath();
Path crawlerLog = root.resolve("crawler.log");
Path newCrawlerLog = Files.createTempFile(root, "crawler", ".migrate.log");
try (WorkLog workLog = new WorkLog(newCrawlerLog)) {
for (Map.Entry<WorkLogEntry, Path> item : WorkLog.iterableMap(crawlerLog, new CrawlDataLocator(root))) {
var entry = item.getKey();
var path = item.getValue();
logger.info("Converting {}", entry.id());
if (path.toFile().getName().endsWith(".parquet")) {
String domain = entry.id();
String id = Integer.toHexString(domain.hashCode());
Path outputFile = CrawlerOutputFile.createSlopPath(root, id, domain);
SlopCrawlDataRecord.convertFromParquet(path, outputFile);
workLog.setJobToFinished(entry.id(), outputFile.toString(), entry.cnt());
}
else {
workLog.setJobToFinished(entry.id(), path.toString(), entry.cnt());
}
}
}
Path oldCrawlerLog = Files.createTempFile(root, "crawler-", ".migrate.old.log");
Files.move(crawlerLog, oldCrawlerLog);
Files.move(newCrawlerLog, crawlerLog);
yield new End();
}
default -> new Error();
};
}
private static class CrawlDataLocator implements Function<WorkLogEntry, Optional<Map.Entry<WorkLogEntry, Path>>> {
private final Path crawlRootDir;
CrawlDataLocator(Path crawlRootDir) {
this.crawlRootDir = crawlRootDir;
}
@Override
public Optional<Map.Entry<WorkLogEntry, Path>> apply(WorkLogEntry entry) {
var path = getCrawledFilePath(crawlRootDir, entry.path());
if (!Files.exists(path)) {
return Optional.empty();
}
try {
return Optional.of(Map.entry(entry, path));
}
catch (Exception ex) {
return Optional.empty();
}
}
private Path getCrawledFilePath(Path crawlDir, String fileName) {
int sp = fileName.lastIndexOf('/');
// Normalize the filename
if (sp >= 0 && sp + 1 < fileName.length())
fileName = fileName.substring(sp + 1);
if (fileName.length() < 4)
fileName = Strings.repeat("0", 4 - fileName.length()) + fileName;
String sp1 = fileName.substring(0, 2);
String sp2 = fileName.substring(2, 4);
return crawlDir.resolve(sp1).resolve(sp2).resolve(fileName);
}
}
@Override
public String describe() {
return "Migrates crawl data to the latest format";
}
}

View File

@@ -29,6 +29,7 @@ dependencies {
implementation libs.jsoup
implementation project(':third-party:rssreader')
implementation libs.opencsv
implementation libs.slop
implementation libs.sqlite
implementation libs.bundles.slf4j
implementation libs.commons.lang3

View File

@@ -15,7 +15,9 @@ import java.util.Map;
/** Client for local browserless.io API */
public class BrowserlessClient implements AutoCloseable {
private static final Logger logger = LoggerFactory.getLogger(BrowserlessClient.class);
private static final String BROWSERLESS_TOKEN = System.getProperty("live-capture.browserless-token", "BROWSERLESS_TOKEN");
private final HttpClient httpClient = HttpClient.newBuilder()
.version(HttpClient.Version.HTTP_1_1)
@@ -36,7 +38,7 @@ public class BrowserlessClient implements AutoCloseable {
);
var request = HttpRequest.newBuilder()
.uri(browserlessURI.resolve("/content"))
.uri(browserlessURI.resolve("/content?token="+BROWSERLESS_TOKEN))
.method("POST", HttpRequest.BodyPublishers.ofString(
gson.toJson(requestData)
))
@@ -63,7 +65,7 @@ public class BrowserlessClient implements AutoCloseable {
);
var request = HttpRequest.newBuilder()
.uri(browserlessURI.resolve("/screenshot"))
.uri(browserlessURI.resolve("/screenshot?token="+BROWSERLESS_TOKEN))
.method("POST", HttpRequest.BodyPublishers.ofString(
gson.toJson(requestData)
))

View File

@@ -1,6 +1,6 @@
package nu.marginalia.rss.model;
import com.apptasticsoftware.rssreader.Item;
import nu.marginalia.rss.svc.SimpleFeedParser;
import org.apache.commons.lang3.StringUtils;
import org.jetbrains.annotations.NotNull;
import org.jsoup.Jsoup;
@@ -18,37 +18,33 @@ public record FeedItem(String title,
public static final int MAX_DESC_LENGTH = 255;
public static final DateTimeFormatter DATE_FORMAT = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
public static FeedItem fromItem(Item item, boolean keepFragment) {
String title = item.getTitle().orElse("");
public static FeedItem fromItem(SimpleFeedParser.ItemData item, boolean keepFragment) {
String title = item.title();
String date = getItemDate(item);
String description = getItemDescription(item);
String url;
if (keepFragment || item.getLink().isEmpty()) {
url = item.getLink().orElse("");
if (keepFragment) {
url = item.url();
}
else {
try {
String link = item.getLink().get();
String link = item.url();
var linkUri = new URI(link);
var cleanUri = new URI(linkUri.getScheme(), linkUri.getAuthority(), linkUri.getPath(), linkUri.getQuery(), null);
url = cleanUri.toString();
}
catch (Exception e) {
// fallback to original link if we can't clean it, this is not a very important step
url = item.getLink().get();
url = item.url();
}
}
return new FeedItem(title, date, description, url);
}
private static String getItemDescription(Item item) {
Optional<String> description = item.getDescription();
if (description.isEmpty())
return "";
String rawDescription = description.get();
private static String getItemDescription(SimpleFeedParser.ItemData item) {
String rawDescription = item.description();
if (rawDescription.indexOf('<') >= 0) {
rawDescription = Jsoup.parseBodyFragment(rawDescription).text();
}
@@ -58,15 +54,18 @@ public record FeedItem(String title,
// e.g. http://fabiensanglard.net/rss.xml does dates like this: 1 Apr 2021 00:00:00 +0000
private static final DateTimeFormatter extraFormatter = DateTimeFormatter.ofPattern("d MMM yyyy HH:mm:ss Z");
private static String getItemDate(Item item) {
private static String getItemDate(SimpleFeedParser.ItemData item) {
Optional<ZonedDateTime> zonedDateTime = Optional.empty();
try {
zonedDateTime = item.getPubDateZonedDateTime();
}
catch (Exception e) {
zonedDateTime = item.getPubDate()
.map(extraFormatter::parse)
.map(ZonedDateTime::from);
try {
zonedDateTime = Optional.of(ZonedDateTime.from(extraFormatter.parse(item.pubDate())));
}
catch (Exception e2) {
// ignore
}
}
return zonedDateTime.map(date -> date.format(DATE_FORMAT)).orElse("");

View File

@@ -1,7 +1,5 @@
package nu.marginalia.rss.svc;
import com.apptasticsoftware.rssreader.Item;
import com.apptasticsoftware.rssreader.RssReader;
import com.google.inject.Inject;
import com.opencsv.CSVReader;
import nu.marginalia.WmsaHome;
@@ -20,7 +18,6 @@ import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorage;
import nu.marginalia.storage.model.FileStorageType;
import nu.marginalia.util.SimpleBlockingThreadPool;
import org.apache.commons.io.input.BOMInputStream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -32,7 +29,6 @@ import java.net.URISyntaxException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.sql.SQLException;
import java.time.*;
import java.time.format.DateTimeFormatter;
@@ -48,8 +44,6 @@ public class FeedFetcherService {
private static final int MAX_FEED_ITEMS = 10;
private static final Logger logger = LoggerFactory.getLogger(FeedFetcherService.class);
private final RssReader rssReader = new RssReader();
private final FeedDb feedDb;
private final FileStorageService fileStorageService;
private final NodeConfigurationService nodeConfigurationService;
@@ -72,17 +66,6 @@ public class FeedFetcherService {
this.nodeConfigurationService = nodeConfigurationService;
this.serviceHeartbeat = serviceHeartbeat;
this.executorClient = executorClient;
// Add support for some alternate date tags for atom
rssReader.addItemExtension("issued", this::setDateFallback);
rssReader.addItemExtension("created", this::setDateFallback);
}
private void setDateFallback(Item item, String value) {
if (item.getPubDate().isEmpty()) {
item.setPubDate(value);
}
}
public enum UpdateMode {
@@ -96,6 +79,7 @@ public class FeedFetcherService {
throw new IllegalStateException("Already updating feeds, refusing to start another update");
}
try (FeedDbWriter writer = feedDb.createWriter();
HttpClient client = HttpClient.newBuilder()
.connectTimeout(Duration.ofSeconds(15))
@@ -103,6 +87,7 @@ public class FeedFetcherService {
.followRedirects(HttpClient.Redirect.NORMAL)
.version(HttpClient.Version.HTTP_2)
.build();
FeedJournal feedJournal = FeedJournal.create();
var heartbeat = serviceHeartbeat.createServiceAdHocTaskHeartbeat("Update Rss Feeds")
) {
updating = true;
@@ -155,6 +140,8 @@ public class FeedFetcherService {
case FetchResult.Success(String value, String etag) -> {
writer.saveEtag(feed.domain(), etag);
writer.saveFeed(parseFeed(value, feed));
feedJournal.record(feed.feedUrl(), value);
}
case FetchResult.NotModified() -> {
writer.saveEtag(feed.domain(), ifNoneMatchTag);
@@ -367,12 +354,7 @@ public class FeedFetcherService {
public FeedItems parseFeed(String feedData, FeedDefinition definition) {
try {
feedData = sanitizeEntities(feedData);
List<Item> rawItems = rssReader.read(
// Massage the data to maximize the possibility of the flaky XML parser consuming it
new BOMInputStream(new ByteArrayInputStream(feedData.trim().getBytes(StandardCharsets.UTF_8)), false)
).toList();
List<SimpleFeedParser.ItemData> rawItems = SimpleFeedParser.parse(feedData);
boolean keepUriFragment = rawItems.size() < 2 || areFragmentsDisparate(rawItems);
@@ -395,33 +377,6 @@ public class FeedFetcherService {
}
}
private static final Map<String, String> HTML_ENTITIES = Map.of(
"&raquo;", "»",
"&laquo;", "«",
"&mdash;", "--",
"&ndash;", "-",
"&rsquo;", "'",
"&lsquo;", "'",
"&quot;", "\"",
"&nbsp;", ""
);
/** The XML parser will blow up if you insert HTML entities in the feed XML,
* which is unfortunately relatively common. Replace them as far as is possible
* with their corresponding characters
*/
static String sanitizeEntities(String feedData) {
String result = feedData;
for (Map.Entry<String, String> entry : HTML_ENTITIES.entrySet()) {
result = result.replace(entry.getKey(), entry.getValue());
}
// Handle lone ampersands not part of a recognized XML entity
result = result.replaceAll("&(?!(amp|lt|gt|apos|quot);)", "&amp;");
return result;
}
/** Decide whether to keep URI fragments in the feed items.
* <p></p>
* We keep fragments if there are multiple different fragments in the items.
@@ -429,16 +384,16 @@ public class FeedFetcherService {
* @param items The items to check
* @return True if we should keep the fragments, false otherwise
*/
private boolean areFragmentsDisparate(List<Item> items) {
private boolean areFragmentsDisparate(List<SimpleFeedParser.ItemData> items) {
Set<String> seenFragments = new HashSet<>();
try {
for (var item : items) {
if (item.getLink().isEmpty()) {
if (item.url().isBlank()) {
continue;
}
var link = item.getLink().get();
var link = item.url();
if (!link.contains("#")) {
continue;
}

View File

@@ -0,0 +1,76 @@
package nu.marginalia.rss.svc;
import nu.marginalia.WmsaHome;
import nu.marginalia.slop.SlopTable;
import nu.marginalia.slop.column.string.StringColumn;
import nu.marginalia.slop.desc.StorageType;
import org.apache.commons.io.FileUtils;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.function.BiConsumer;
/** Utility for recording fetched feeds to a journal, useful in debugging feed parser issues.
*/
public interface FeedJournal extends AutoCloseable {
StringColumn urlColumn = new StringColumn("url");
StringColumn contentsColumn = new StringColumn("contents", StandardCharsets.UTF_8, StorageType.ZSTD);
void record(String url, String contents) throws IOException;
void close() throws IOException;
static FeedJournal create() throws IOException {
if (Boolean.getBoolean("feedFetcher.persistJournal")) {
Path journalPath = WmsaHome.getDataPath().resolve("feed-journal");
if (Files.isDirectory(journalPath)) {
FileUtils.deleteDirectory(journalPath.toFile());
}
Files.createDirectories(journalPath);
return new RecordingFeedJournal(journalPath);
}
else {
return new NoOpFeedJournal();
}
}
class NoOpFeedJournal implements FeedJournal {
@Override
public void record(String url, String contents) {}
@Override
public void close() {}
}
class RecordingFeedJournal extends SlopTable implements FeedJournal {
private final StringColumn.Writer urlWriter;
private final StringColumn.Writer contentsWriter;
public RecordingFeedJournal(Path path) throws IOException {
super(path, SlopTable.getNumPages(path, FeedJournal.urlColumn));
urlWriter = urlColumn.create(this);
contentsWriter = contentsColumn.create(this);
}
public synchronized void record(String url, String contents) throws IOException {
urlWriter.put(url);
contentsWriter.put(contents);
}
}
static void replay(Path journalPath, BiConsumer<String, String> urlAndContent) throws IOException {
try (SlopTable table = new SlopTable(journalPath)) {
final StringColumn.Reader urlReader = urlColumn.open(table);
final StringColumn.Reader contentsReader = contentsColumn.open(table);
while (urlReader.hasRemaining()) {
urlAndContent.accept(urlReader.get(), contentsReader.get());
}
}
}
}
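A usage sketch based on the interface above: replay a journal written by a previous run (enabled with -DfeedFetcher.persistJournal=true) and re-run the parser over the captured feed XML. It assumes the classes shown in these diffs are on the classpath.

import nu.marginalia.WmsaHome;
import nu.marginalia.rss.svc.FeedJournal;
import nu.marginalia.rss.svc.SimpleFeedParser;

import java.nio.file.Path;

class FeedJournalReplayExample {
    public static void main(String[] args) throws Exception {
        // Same location FeedJournal.create() writes to
        Path journalPath = WmsaHome.getDataPath().resolve("feed-journal");
        FeedJournal.replay(journalPath, (url, contents) -> {
            var items = SimpleFeedParser.parse(contents);
            System.out.println(url + " -> " + items.size() + " items");
        });
    }
}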

View File

@@ -0,0 +1,94 @@
package nu.marginalia.rss.svc;
import com.apptasticsoftware.rssreader.DateTimeParser;
import com.apptasticsoftware.rssreader.util.Default;
import org.jsoup.Jsoup;
import org.jsoup.parser.Parser;
import java.time.ZonedDateTime;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
public class SimpleFeedParser {
private static final DateTimeParser dateTimeParser = Default.getDateTimeParser();
public record ItemData (
String title,
String description,
String url,
String pubDate
) {
public boolean isWellFormed() {
return title != null && !title.isBlank() &&
description != null && !description.isBlank() &&
url != null && !url.isBlank() &&
pubDate != null && !pubDate.isBlank();
}
public Optional<ZonedDateTime> getPubDateZonedDateTime() {
try {
return Optional.ofNullable(dateTimeParser.parse(pubDate()));
}
catch (Exception e) {
return Optional.empty();
}
}
}
public static List<ItemData> parse(String content) {
var doc = Jsoup.parse(content, Parser.xmlParser());
List<ItemData> ret = new ArrayList<>();
doc.select("item, entry").forEach(element -> {
String link = "";
String title = "";
String description = "";
String pubDate = "";
for (String attr : List.of("title", "dc:title")) {
if (!title.isBlank())
break;
var tag = element.getElementsByTag(attr).first();
if (tag != null) {
title = tag.text();
}
}
for (String attr : List.of("title", "summary", "content", "description", "dc:description")) {
if (!description.isBlank())
break;
var tag = element.getElementsByTag(attr).first();
if (tag != null) {
description = tag.text();
}
}
for (String attr : List.of("pubDate", "published", "updated", "issued", "created", "dc:date")) {
if (!pubDate.isBlank())
break;
var tag = element.getElementsByTag(attr).first();
if (tag != null) {
pubDate = tag.text();
}
}
for (String attr : List.of("link", "url")) {
if (!link.isBlank())
break;
var tag = element.getElementsByTag(attr).first();
if (tag != null) {
link = tag.text();
}
}
ret.add(new ItemData(title, description, link, pubDate));
});
return ret;
}
}
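A quick usage sketch for the parser above, illustrating why the jsoup-based approach was chosen: it tolerates feeds a strict XML parser would reject (here, unclosed channel/rss tags) and still yields the item fields. The example feed is made up.

import nu.marginalia.rss.svc.SimpleFeedParser;

class SimpleFeedParserExample {
    public static void main(String[] args) {
        String feed = """
                <rss><channel>
                  <item>
                    <title>Cheap &amp; cheerful</title>
                    <link>https://example.com/post</link>
                    <pubDate>1 Apr 2021 00:00:00 +0000</pubDate>
                    <description>First post</description>
                  </item>
                """; // channel and rss are never closed; jsoup's lenient XML parser still finds the item
        for (var item : SimpleFeedParser.parse(feed)) {
            System.out.println(item.title() + " -> " + item.url());
        }
    }
}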

View File

@@ -2,16 +2,21 @@ package nu.marginalia.livecapture;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import java.net.URI;
import java.util.Map;
@Testcontainers
@Tag("slow")
public class BrowserlessClientTest {
static GenericContainer<?> container = new GenericContainer<>(DockerImageName.parse("browserless/chrome")).withExposedPorts(3000);
static GenericContainer<?> container = new GenericContainer<>(DockerImageName.parse("browserless/chrome"))
.withEnv(Map.of("TOKEN", "BROWSERLESS_TOKEN"))
.withExposedPorts(3000);
@BeforeAll
public static void setup() {

View File

@@ -1,50 +0,0 @@
package nu.marginalia.rss.svc;
import com.apptasticsoftware.rssreader.Item;
import com.apptasticsoftware.rssreader.RssReader;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import java.util.List;
import java.util.Optional;
public class TestXmlSanitization {
@Test
public void testPreservedEntities() {
Assertions.assertEquals("&amp;", FeedFetcherService.sanitizeEntities("&amp;"));
Assertions.assertEquals("&lt;", FeedFetcherService.sanitizeEntities("&lt;"));
Assertions.assertEquals("&gt;", FeedFetcherService.sanitizeEntities("&gt;"));
Assertions.assertEquals("&apos;", FeedFetcherService.sanitizeEntities("&apos;"));
}
@Test
public void testNlnetTitleTag() {
// The NLnet atom feed puts HTML tags in the entry/title tags, which breaks the vanilla RssReader code
// Verify we're able to consume and strip out the HTML tags
RssReader r = new RssReader();
List<Item> items = r.read(ClassLoader.getSystemResourceAsStream("nlnet.atom")).toList();
Assertions.assertEquals(1, items.size());
for (var item : items) {
Assertions.assertEquals(Optional.of("50 Free and Open Source Projects Selected for NGI Zero grants"), item.getTitle());
}
}
@Test
public void testStrayAmpersand() {
Assertions.assertEquals("Bed &amp; Breakfast", FeedFetcherService.sanitizeEntities("Bed & Breakfast"));
}
@Test
public void testTranslatedHtmlEntity() {
Assertions.assertEquals("Foo -- Bar", FeedFetcherService.sanitizeEntities("Foo &mdash; Bar"));
}
@Test
public void testTranslatedHtmlEntityQuot() {
Assertions.assertEquals("\"Bob\"", FeedFetcherService.sanitizeEntities("&quot;Bob&quot;"));
}
}

View File

@@ -2,9 +2,6 @@ package nu.marginalia.api.searchquery;
import nu.marginalia.api.searchquery.model.query.SearchPhraseConstraint;
import nu.marginalia.api.searchquery.model.query.SearchQuery;
import nu.marginalia.api.searchquery.model.results.Bm25Parameters;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.index.query.limit.QueryLimits;
import nu.marginalia.index.query.limit.SpecificationLimit;
import nu.marginalia.index.query.limit.SpecificationLimitType;
@@ -27,37 +24,19 @@ public class IndexProtobufCodec {
.build();
}
public static QueryLimits convertQueryLimits(RpcQueryLimits queryLimits) {
return new QueryLimits(
queryLimits.getResultsByDomain(),
queryLimits.getResultsTotal(),
queryLimits.getTimeoutMs(),
queryLimits.getFetchSize()
);
}
public static RpcQueryLimits convertQueryLimits(QueryLimits queryLimits) {
return RpcQueryLimits.newBuilder()
.setResultsByDomain(queryLimits.resultsByDomain())
.setResultsTotal(queryLimits.resultsTotal())
.setTimeoutMs(queryLimits.timeoutMs())
.setFetchSize(queryLimits.fetchSize())
.build();
}
public static SearchQuery convertRpcQuery(RpcQuery query) {
List<SearchPhraseConstraint> phraeConstraints = new ArrayList<>();
List<SearchPhraseConstraint> phraseConstraints = new ArrayList<>();
for (int j = 0; j < query.getPhrasesCount(); j++) {
var coh = query.getPhrases(j);
if (coh.getType() == RpcPhrases.TYPE.OPTIONAL) {
phraeConstraints.add(new SearchPhraseConstraint.Optional(List.copyOf(coh.getTermsList())));
phraseConstraints.add(new SearchPhraseConstraint.Optional(List.copyOf(coh.getTermsList())));
}
else if (coh.getType() == RpcPhrases.TYPE.MANDATORY) {
phraeConstraints.add(new SearchPhraseConstraint.Mandatory(List.copyOf(coh.getTermsList())));
phraseConstraints.add(new SearchPhraseConstraint.Mandatory(List.copyOf(coh.getTermsList())));
}
else if (coh.getType() == RpcPhrases.TYPE.FULL) {
phraeConstraints.add(new SearchPhraseConstraint.Full(List.copyOf(coh.getTermsList())));
phraseConstraints.add(new SearchPhraseConstraint.Full(List.copyOf(coh.getTermsList())));
}
else {
throw new IllegalArgumentException("Unknown phrase constraint type: " + coh.getType());
@@ -70,7 +49,7 @@ public class IndexProtobufCodec {
query.getExcludeList(),
query.getAdviceList(),
query.getPriorityList(),
phraeConstraints
phraseConstraints
);
}
@@ -103,60 +82,4 @@ public class IndexProtobufCodec {
return subqueryBuilder.build();
}
public static ResultRankingParameters convertRankingParameterss(RpcResultRankingParameters params) {
if (params == null)
return ResultRankingParameters.sensibleDefaults();
return new ResultRankingParameters(
new Bm25Parameters(params.getBm25K(), params.getBm25B()),
params.getShortDocumentThreshold(),
params.getShortDocumentPenalty(),
params.getDomainRankBonus(),
params.getQualityPenalty(),
params.getShortSentenceThreshold(),
params.getShortSentencePenalty(),
params.getBm25Weight(),
params.getTcfFirstPositionWeight(),
params.getTcfVerbatimWeight(),
params.getTcfProximityWeight(),
ResultRankingParameters.TemporalBias.valueOf(params.getTemporalBias().getBias().name()),
params.getTemporalBiasWeight(),
params.getExportDebugData()
);
}
public static RpcResultRankingParameters convertRankingParameterss(ResultRankingParameters rankingParams,
RpcTemporalBias temporalBias)
{
if (rankingParams == null) {
rankingParams = ResultRankingParameters.sensibleDefaults();
}
var builder = RpcResultRankingParameters.newBuilder()
.setBm25B(rankingParams.bm25Params.b())
.setBm25K(rankingParams.bm25Params.k())
.setShortDocumentThreshold(rankingParams.shortDocumentThreshold)
.setShortDocumentPenalty(rankingParams.shortDocumentPenalty)
.setDomainRankBonus(rankingParams.domainRankBonus)
.setQualityPenalty(rankingParams.qualityPenalty)
.setShortSentenceThreshold(rankingParams.shortSentenceThreshold)
.setShortSentencePenalty(rankingParams.shortSentencePenalty)
.setBm25Weight(rankingParams.bm25Weight)
.setTcfFirstPositionWeight(rankingParams.tcfFirstPosition)
.setTcfProximityWeight(rankingParams.tcfProximity)
.setTcfVerbatimWeight(rankingParams.tcfVerbatim)
.setTemporalBiasWeight(rankingParams.temporalBiasWeight)
.setExportDebugData(rankingParams.exportDebugData);
if (temporalBias != null && temporalBias.getBias() != RpcTemporalBias.Bias.NONE) {
builder.setTemporalBias(temporalBias);
}
else {
builder.setTemporalBias(RpcTemporalBias.newBuilder()
.setBias(RpcTemporalBias.Bias.valueOf(rankingParams.temporalBias.name())));
}
return builder.build();
}
}

View File

@@ -5,7 +5,7 @@ import nu.marginalia.api.searchquery.model.query.QueryParams;
import nu.marginalia.api.searchquery.model.query.QueryResponse;
import nu.marginalia.api.searchquery.model.query.SearchSpecification;
import nu.marginalia.api.searchquery.model.results.DecoratedSearchResultItem;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.api.searchquery.model.results.PrototypeRankingParameters;
import nu.marginalia.api.searchquery.model.results.SearchResultItem;
import nu.marginalia.api.searchquery.model.results.SearchResultKeywordScore;
import nu.marginalia.api.searchquery.model.results.debug.DebugFactor;
@@ -37,7 +37,7 @@ public class QueryProtobufCodec {
builder.setSize(IndexProtobufCodec.convertSpecLimit(query.specs.size));
builder.setRank(IndexProtobufCodec.convertSpecLimit(query.specs.rank));
builder.setQueryLimits(IndexProtobufCodec.convertQueryLimits(query.specs.queryLimits));
builder.setQueryLimits(query.specs.queryLimits);
// Query strategy may be overridden by the query, but if not, use the one from the request
if (query.specs.queryStrategy != null && query.specs.queryStrategy != QueryStrategy.AUTO)
@@ -45,9 +45,27 @@ public class QueryProtobufCodec {
else
builder.setQueryStrategy(request.getQueryStrategy());
if (request.getTemporalBias().getBias() != RpcTemporalBias.Bias.NONE) {
if (query.specs.rankingParams != null) {
builder.setParameters(IndexProtobufCodec.convertRankingParameterss(query.specs.rankingParams, request.getTemporalBias()));
builder.setParameters(
RpcResultRankingParameters.newBuilder(query.specs.rankingParams)
.setTemporalBias(request.getTemporalBias())
.build()
);
} else {
builder.setParameters(
RpcResultRankingParameters.newBuilder(PrototypeRankingParameters.sensibleDefaults())
.setTemporalBias(request.getTemporalBias())
.build()
);
}
} else if (query.specs.rankingParams != null) {
builder.setParameters(query.specs.rankingParams);
}
// else {
// if we have no ranking params, we don't need to set them, the client check and use the default values
// so we don't need to send this huge object over the wire
// }
return builder.build();
}
@@ -65,18 +83,13 @@ public class QueryProtobufCodec {
builder.setSize(IndexProtobufCodec.convertSpecLimit(query.specs.size));
builder.setRank(IndexProtobufCodec.convertSpecLimit(query.specs.rank));
builder.setQueryLimits(IndexProtobufCodec.convertQueryLimits(query.specs.queryLimits));
builder.setQueryLimits(query.specs.queryLimits);
// Query strategy may be overridden by the query, but if not, use the one from the request
builder.setQueryStrategy(query.specs.queryStrategy.name());
if (query.specs.rankingParams != null) {
builder.setParameters(IndexProtobufCodec.convertRankingParameterss(
query.specs.rankingParams,
RpcTemporalBias.newBuilder().setBias(
RpcTemporalBias.Bias.NONE)
.build())
);
builder.setParameters(query.specs.rankingParams);
}
return builder.build();
@@ -95,10 +108,10 @@ public class QueryProtobufCodec {
IndexProtobufCodec.convertSpecLimit(request.getSize()),
IndexProtobufCodec.convertSpecLimit(request.getRank()),
request.getDomainIdsList(),
IndexProtobufCodec.convertQueryLimits(request.getQueryLimits()),
request.getQueryLimits(),
request.getSearchSetIdentifier(),
QueryStrategy.valueOf(request.getQueryStrategy()),
ResultRankingParameters.TemporalBias.valueOf(request.getTemporalBias().getBias().name()),
RpcTemporalBias.Bias.valueOf(request.getTemporalBias().getBias().name()),
request.getPagination().getPage()
);
}
@@ -294,9 +307,9 @@ public class QueryProtobufCodec {
IndexProtobufCodec.convertSpecLimit(specs.getYear()),
IndexProtobufCodec.convertSpecLimit(specs.getSize()),
IndexProtobufCodec.convertSpecLimit(specs.getRank()),
IndexProtobufCodec.convertQueryLimits(specs.getQueryLimits()),
specs.getQueryLimits(),
QueryStrategy.valueOf(specs.getQueryStrategy()),
IndexProtobufCodec.convertRankingParameterss(specs.getParameters())
specs.hasParameters() ? specs.getParameters() : null
);
}
@@ -307,7 +320,7 @@ public class QueryProtobufCodec {
.addAllTacitExcludes(params.tacitExcludes())
.addAllTacitPriority(params.tacitPriority())
.setHumanQuery(params.humanQuery())
.setQueryLimits(IndexProtobufCodec.convertQueryLimits(params.limits()))
.setQueryLimits(params.limits())
.setQuality(IndexProtobufCodec.convertSpecLimit(params.quality()))
.setYear(IndexProtobufCodec.convertSpecLimit(params.year()))
.setSize(IndexProtobufCodec.convertSpecLimit(params.size()))
@@ -319,7 +332,7 @@ public class QueryProtobufCodec {
.build())
.setPagination(RpcQsQueryPagination.newBuilder()
.setPage(params.page())
.setPageSize(Math.min(100, params.limits().resultsTotal()))
.setPageSize(Math.min(100, params.limits().getResultsTotal()))
.build());
if (params.nearDomain() != null)

View File

@@ -1,7 +1,7 @@
package nu.marginalia.api.searchquery.model.query;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.index.query.limit.QueryLimits;
import nu.marginalia.api.searchquery.RpcQueryLimits;
import nu.marginalia.api.searchquery.RpcTemporalBias;
import nu.marginalia.index.query.limit.QueryStrategy;
import nu.marginalia.index.query.limit.SpecificationLimit;
@@ -21,14 +21,14 @@ public record QueryParams(
SpecificationLimit size,
SpecificationLimit rank,
List<Integer> domainIds,
QueryLimits limits,
RpcQueryLimits limits,
String identifier,
QueryStrategy queryStrategy,
ResultRankingParameters.TemporalBias temporalBias,
RpcTemporalBias.Bias temporalBias,
int page
)
{
public QueryParams(String query, QueryLimits limits, String identifier) {
public QueryParams(String query, RpcQueryLimits limits, String identifier) {
this(query, null,
List.of(),
List.of(),
@@ -42,7 +42,7 @@ public record QueryParams(
limits,
identifier,
QueryStrategy.AUTO,
ResultRankingParameters.TemporalBias.NONE,
RpcTemporalBias.Bias.NONE,
1 // page
);
}

View File

@@ -1,10 +1,11 @@
package nu.marginalia.api.searchquery.model.query;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.index.query.limit.QueryLimits;
import nu.marginalia.api.searchquery.RpcQueryLimits;
import nu.marginalia.api.searchquery.RpcResultRankingParameters;
import nu.marginalia.index.query.limit.QueryStrategy;
import nu.marginalia.index.query.limit.SpecificationLimit;
import javax.annotation.Nullable;
import java.util.List;
public class SearchSpecification {
@@ -24,11 +25,12 @@ public class SearchSpecification {
public SpecificationLimit size;
public SpecificationLimit rank;
public final QueryLimits queryLimits;
public final RpcQueryLimits queryLimits;
public final QueryStrategy queryStrategy;
public final ResultRankingParameters rankingParams;
@Nullable
public final RpcResultRankingParameters rankingParams;
public SearchSpecification(SearchQuery query,
List<Integer> domains,
@@ -38,9 +40,9 @@ public class SearchSpecification {
SpecificationLimit year,
SpecificationLimit size,
SpecificationLimit rank,
QueryLimits queryLimits,
RpcQueryLimits queryLimits,
QueryStrategy queryStrategy,
ResultRankingParameters rankingParams)
@Nullable RpcResultRankingParameters rankingParams)
{
this.query = query;
this.domains = domains;
@@ -91,7 +93,7 @@ public class SearchSpecification {
return this.rank;
}
public QueryLimits getQueryLimits() {
public RpcQueryLimits getQueryLimits() {
return this.queryLimits;
}
@@ -99,7 +101,7 @@ public class SearchSpecification {
return this.queryStrategy;
}
public ResultRankingParameters getRankingParams() {
public RpcResultRankingParameters getRankingParams() {
return this.rankingParams;
}
@@ -120,9 +122,9 @@ public class SearchSpecification {
private boolean size$set;
private SpecificationLimit rank$value;
private boolean rank$set;
private QueryLimits queryLimits;
private RpcQueryLimits queryLimits;
private QueryStrategy queryStrategy;
private ResultRankingParameters rankingParams;
private RpcResultRankingParameters rankingParams;
SearchSpecificationBuilder() {
}
@@ -171,7 +173,7 @@ public class SearchSpecification {
return this;
}
public SearchSpecificationBuilder queryLimits(QueryLimits queryLimits) {
public SearchSpecificationBuilder queryLimits(RpcQueryLimits queryLimits) {
this.queryLimits = queryLimits;
return this;
}
@@ -181,7 +183,7 @@ public class SearchSpecification {
return this;
}
public SearchSpecificationBuilder rankingParams(ResultRankingParameters rankingParams) {
public SearchSpecificationBuilder rankingParams(RpcResultRankingParameters rankingParams) {
this.rankingParams = rankingParams;
return this;
}


@@ -0,0 +1,33 @@
package nu.marginalia.api.searchquery.model.results;
import nu.marginalia.api.searchquery.RpcResultRankingParameters;
import nu.marginalia.api.searchquery.RpcTemporalBias;
public class PrototypeRankingParameters {
/** These are the default ranking parameters that are used when no parameters are specified. */
private static final RpcResultRankingParameters _sensibleDefaults = RpcResultRankingParameters.newBuilder()
.setBm25B(0.5)
.setBm25K(1.2)
.setShortDocumentThreshold(2000)
.setShortDocumentPenalty(2.)
.setDomainRankBonus(1 / 100.)
.setQualityPenalty(1 / 15.)
.setShortSentenceThreshold(2)
.setShortSentencePenalty(5)
.setBm25Weight(1.)
.setTcfVerbatimWeight(1.)
.setTcfProximityWeight(1.)
.setTcfFirstPositionWeight(5)
.setTemporalBias(RpcTemporalBias.newBuilder().setBias(RpcTemporalBias.Bias.NONE))
.setTemporalBiasWeight(5.0)
.setExportDebugData(false)
.setDisablePenalties(false)
.build();
public static RpcResultRankingParameters sensibleDefaults() {
return _sensibleDefaults;
}
}
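
For callers that only want to tweak a knob or two, the protobuf-generated builder can be seeded from these defaults rather than rebuilding the whole message. A minimal sketch, assuming the standard protobuf-java toBuilder() API on the generated RpcResultRankingParameters; the chosen overrides are illustrative, not values the search engine actually uses:

import nu.marginalia.api.searchquery.RpcResultRankingParameters;
import nu.marginalia.api.searchquery.model.results.PrototypeRankingParameters;

class RankingParametersExample {
    public static void main(String[] args) {
        // Start from the shared defaults and override individual fields
        RpcResultRankingParameters tweaked = PrototypeRankingParameters.sensibleDefaults()
                .toBuilder()
                .setDisablePenalties(true)     // disablePenalties (field 19) is introduced in this changeset
                .setTemporalBiasWeight(1.0)
                .build();

        System.out.println(tweaked.getDisablePenalties());
    }
}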


@@ -1,12 +1,13 @@
package nu.marginalia.api.searchquery.model.results;
import nu.marginalia.api.searchquery.RpcResultRankingParameters;
import nu.marginalia.api.searchquery.model.compiled.CqDataInt;
import java.util.BitSet;
public class ResultRankingContext {
private final int docCount;
public final ResultRankingParameters params;
public final RpcResultRankingParameters params;
public final BitSet regularMask;
@@ -21,7 +22,7 @@ public class ResultRankingContext {
public final CqDataInt priorityCounts;
public ResultRankingContext(int docCount,
ResultRankingParameters params,
RpcResultRankingParameters params,
BitSet ngramsMask,
BitSet regularMask,
CqDataInt fullCounts,


@@ -1,278 +0,0 @@
package nu.marginalia.api.searchquery.model.results;
import java.util.Objects;
public class ResultRankingParameters {
/**
* Tuning for BM25 when applied to full document matches
*/
public final Bm25Parameters bm25Params;
/**
* Documents below this length are penalized
*/
public int shortDocumentThreshold;
public double shortDocumentPenalty;
/**
* Scaling factor associated with domain rank (unscaled rank value is 0-255; high is good)
*/
public double domainRankBonus;
/**
* Scaling factor associated with document quality (unscaled rank value is 0-15; high is bad)
*/
public double qualityPenalty;
/**
* Average sentence length values below this threshold are penalized, range [0-4), 2 or 3 is probably what you want
*/
public int shortSentenceThreshold;
/**
* Magnitude of penalty for documents with low average sentence length
*/
public double shortSentencePenalty;
public double bm25Weight;
public double tcfFirstPosition;
public double tcfVerbatim;
public double tcfProximity;
public TemporalBias temporalBias;
public double temporalBiasWeight;
public boolean exportDebugData;
public ResultRankingParameters(Bm25Parameters bm25Params, int shortDocumentThreshold, double shortDocumentPenalty, double domainRankBonus, double qualityPenalty, int shortSentenceThreshold, double shortSentencePenalty, double bm25Weight, double tcfFirstPosition, double tcfVerbatim, double tcfProximity, TemporalBias temporalBias, double temporalBiasWeight, boolean exportDebugData) {
this.bm25Params = bm25Params;
this.shortDocumentThreshold = shortDocumentThreshold;
this.shortDocumentPenalty = shortDocumentPenalty;
this.domainRankBonus = domainRankBonus;
this.qualityPenalty = qualityPenalty;
this.shortSentenceThreshold = shortSentenceThreshold;
this.shortSentencePenalty = shortSentencePenalty;
this.bm25Weight = bm25Weight;
this.tcfFirstPosition = tcfFirstPosition;
this.tcfVerbatim = tcfVerbatim;
this.tcfProximity = tcfProximity;
this.temporalBias = temporalBias;
this.temporalBiasWeight = temporalBiasWeight;
this.exportDebugData = exportDebugData;
}
public static ResultRankingParameters sensibleDefaults() {
return builder()
.bm25Params(new Bm25Parameters(1.2, 0.5))
.shortDocumentThreshold(2000)
.shortDocumentPenalty(2.)
.domainRankBonus(1 / 100.)
.qualityPenalty(1 / 15.)
.shortSentenceThreshold(2)
.shortSentencePenalty(5)
.bm25Weight(1.)
.tcfVerbatim(1.)
.tcfProximity(1.)
.tcfFirstPosition(5)
.temporalBias(TemporalBias.NONE)
.temporalBiasWeight(5.0)
.exportDebugData(false)
.build();
}
public static ResultRankingParametersBuilder builder() {
return new ResultRankingParametersBuilder();
}
public Bm25Parameters getBm25Params() {
return this.bm25Params;
}
public int getShortDocumentThreshold() {
return this.shortDocumentThreshold;
}
public double getShortDocumentPenalty() {
return this.shortDocumentPenalty;
}
public double getDomainRankBonus() {
return this.domainRankBonus;
}
public double getQualityPenalty() {
return this.qualityPenalty;
}
public int getShortSentenceThreshold() {
return this.shortSentenceThreshold;
}
public double getShortSentencePenalty() {
return this.shortSentencePenalty;
}
public double getBm25Weight() {
return this.bm25Weight;
}
public double getTcfFirstPosition() {
return this.tcfFirstPosition;
}
public double getTcfVerbatim() {
return this.tcfVerbatim;
}
public double getTcfProximity() {
return this.tcfProximity;
}
public TemporalBias getTemporalBias() {
return this.temporalBias;
}
public double getTemporalBiasWeight() {
return this.temporalBiasWeight;
}
public boolean isExportDebugData() {
return this.exportDebugData;
}
@Override
public final boolean equals(Object o) {
if (this == o) return true;
if (!(o instanceof ResultRankingParameters that)) return false;
return shortDocumentThreshold == that.shortDocumentThreshold && Double.compare(shortDocumentPenalty, that.shortDocumentPenalty) == 0 && Double.compare(domainRankBonus, that.domainRankBonus) == 0 && Double.compare(qualityPenalty, that.qualityPenalty) == 0 && shortSentenceThreshold == that.shortSentenceThreshold && Double.compare(shortSentencePenalty, that.shortSentencePenalty) == 0 && Double.compare(bm25Weight, that.bm25Weight) == 0 && Double.compare(tcfFirstPosition, that.tcfFirstPosition) == 0 && Double.compare(tcfVerbatim, that.tcfVerbatim) == 0 && Double.compare(tcfProximity, that.tcfProximity) == 0 && Double.compare(temporalBiasWeight, that.temporalBiasWeight) == 0 && exportDebugData == that.exportDebugData && Objects.equals(bm25Params, that.bm25Params) && temporalBias == that.temporalBias;
}
@Override
public int hashCode() {
int result = Objects.hashCode(bm25Params);
result = 31 * result + shortDocumentThreshold;
result = 31 * result + Double.hashCode(shortDocumentPenalty);
result = 31 * result + Double.hashCode(domainRankBonus);
result = 31 * result + Double.hashCode(qualityPenalty);
result = 31 * result + shortSentenceThreshold;
result = 31 * result + Double.hashCode(shortSentencePenalty);
result = 31 * result + Double.hashCode(bm25Weight);
result = 31 * result + Double.hashCode(tcfFirstPosition);
result = 31 * result + Double.hashCode(tcfVerbatim);
result = 31 * result + Double.hashCode(tcfProximity);
result = 31 * result + Objects.hashCode(temporalBias);
result = 31 * result + Double.hashCode(temporalBiasWeight);
result = 31 * result + Boolean.hashCode(exportDebugData);
return result;
}
public String toString() {
return "ResultRankingParameters(bm25Params=" + this.getBm25Params() + ", shortDocumentThreshold=" + this.getShortDocumentThreshold() + ", shortDocumentPenalty=" + this.getShortDocumentPenalty() + ", domainRankBonus=" + this.getDomainRankBonus() + ", qualityPenalty=" + this.getQualityPenalty() + ", shortSentenceThreshold=" + this.getShortSentenceThreshold() + ", shortSentencePenalty=" + this.getShortSentencePenalty() + ", bm25Weight=" + this.getBm25Weight() + ", tcfFirstPosition=" + this.getTcfFirstPosition() + ", tcfVerbatim=" + this.getTcfVerbatim() + ", tcfProximity=" + this.getTcfProximity() + ", temporalBias=" + this.getTemporalBias() + ", temporalBiasWeight=" + this.getTemporalBiasWeight() + ", exportDebugData=" + this.isExportDebugData() + ")";
}
public enum TemporalBias {
RECENT, OLD, NONE
}
public static class ResultRankingParametersBuilder {
private Bm25Parameters bm25Params;
private int shortDocumentThreshold;
private double shortDocumentPenalty;
private double domainRankBonus;
private double qualityPenalty;
private int shortSentenceThreshold;
private double shortSentencePenalty;
private double bm25Weight;
private double tcfFirstPosition;
private double tcfVerbatim;
private double tcfProximity;
private TemporalBias temporalBias;
private double temporalBiasWeight;
private boolean exportDebugData;
ResultRankingParametersBuilder() {
}
public ResultRankingParametersBuilder bm25Params(Bm25Parameters bm25Params) {
this.bm25Params = bm25Params;
return this;
}
public ResultRankingParametersBuilder shortDocumentThreshold(int shortDocumentThreshold) {
this.shortDocumentThreshold = shortDocumentThreshold;
return this;
}
public ResultRankingParametersBuilder shortDocumentPenalty(double shortDocumentPenalty) {
this.shortDocumentPenalty = shortDocumentPenalty;
return this;
}
public ResultRankingParametersBuilder domainRankBonus(double domainRankBonus) {
this.domainRankBonus = domainRankBonus;
return this;
}
public ResultRankingParametersBuilder qualityPenalty(double qualityPenalty) {
this.qualityPenalty = qualityPenalty;
return this;
}
public ResultRankingParametersBuilder shortSentenceThreshold(int shortSentenceThreshold) {
this.shortSentenceThreshold = shortSentenceThreshold;
return this;
}
public ResultRankingParametersBuilder shortSentencePenalty(double shortSentencePenalty) {
this.shortSentencePenalty = shortSentencePenalty;
return this;
}
public ResultRankingParametersBuilder bm25Weight(double bm25Weight) {
this.bm25Weight = bm25Weight;
return this;
}
public ResultRankingParametersBuilder tcfFirstPosition(double tcfFirstPosition) {
this.tcfFirstPosition = tcfFirstPosition;
return this;
}
public ResultRankingParametersBuilder tcfVerbatim(double tcfVerbatim) {
this.tcfVerbatim = tcfVerbatim;
return this;
}
public ResultRankingParametersBuilder tcfProximity(double tcfProximity) {
this.tcfProximity = tcfProximity;
return this;
}
public ResultRankingParametersBuilder temporalBias(TemporalBias temporalBias) {
this.temporalBias = temporalBias;
return this;
}
public ResultRankingParametersBuilder temporalBiasWeight(double temporalBiasWeight) {
this.temporalBiasWeight = temporalBiasWeight;
return this;
}
public ResultRankingParametersBuilder exportDebugData(boolean exportDebugData) {
this.exportDebugData = exportDebugData;
return this;
}
public ResultRankingParameters build() {
return new ResultRankingParameters(this.bm25Params, this.shortDocumentThreshold, this.shortDocumentPenalty, this.domainRankBonus, this.qualityPenalty, this.shortSentenceThreshold, this.shortSentencePenalty, this.bm25Weight, this.tcfFirstPosition, this.tcfVerbatim, this.tcfProximity, this.temporalBias, this.temporalBiasWeight, this.exportDebugData);
}
public String toString() {
return "ResultRankingParameters.ResultRankingParametersBuilder(bm25Params=" + this.bm25Params + ", shortDocumentThreshold=" + this.shortDocumentThreshold + ", shortDocumentPenalty=" + this.shortDocumentPenalty + ", domainRankBonus=" + this.domainRankBonus + ", qualityPenalty=" + this.qualityPenalty + ", shortSentenceThreshold=" + this.shortSentenceThreshold + ", shortSentencePenalty=" + this.shortSentencePenalty + ", bm25Weight=" + this.bm25Weight + ", tcfFirstPosition=" + this.tcfFirstPosition + ", tcfVerbatim=" + this.tcfVerbatim + ", tcfProximity=" + this.tcfProximity + ", temporalBias=" + this.temporalBias + ", temporalBiasWeight=" + this.temporalBiasWeight + ", exportDebugData=" + this.exportDebugData + ")";
}
}
}


@@ -162,6 +162,7 @@ message RpcResultRankingParameters {
double temporalBiasWeight = 17;
bool exportDebugData = 18;
bool disablePenalties = 19;
}


@@ -3,8 +3,6 @@ package nu.marginalia.index.client;
import nu.marginalia.api.searchquery.IndexProtobufCodec;
import nu.marginalia.api.searchquery.model.query.SearchPhraseConstraint;
import nu.marginalia.api.searchquery.model.query.SearchQuery;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.index.query.limit.QueryLimits;
import nu.marginalia.index.query.limit.SpecificationLimit;
import org.junit.jupiter.api.Test;
@@ -22,18 +20,6 @@ class IndexProtobufCodecTest {
verifyIsIdentityTransformation(SpecificationLimit.lessThan(1), l -> IndexProtobufCodec.convertSpecLimit(IndexProtobufCodec.convertSpecLimit(l)));
}
@Test
public void testRankingParameters() {
verifyIsIdentityTransformation(ResultRankingParameters.sensibleDefaults(),
p -> IndexProtobufCodec.convertRankingParameterss(IndexProtobufCodec.convertRankingParameterss(p, null)));
}
@Test
public void testQueryLimits() {
verifyIsIdentityTransformation(new QueryLimits(1,2,3,4),
l -> IndexProtobufCodec.convertQueryLimits(IndexProtobufCodec.convertQueryLimits(l))
);
}
@Test
public void testSubqery() {
verifyIsIdentityTransformation(new SearchQuery(


@@ -2,8 +2,9 @@ package nu.marginalia.functions.searchquery;
import com.google.inject.Inject;
import com.google.inject.Singleton;
import nu.marginalia.api.searchquery.RpcQueryLimits;
import nu.marginalia.api.searchquery.RpcResultRankingParameters;
import nu.marginalia.api.searchquery.model.query.*;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.functions.searchquery.query_parser.QueryExpansion;
import nu.marginalia.functions.searchquery.query_parser.QueryParser;
import nu.marginalia.functions.searchquery.query_parser.token.QueryToken;
@@ -36,7 +37,7 @@ public class QueryFactory {
public ProcessedQuery createQuery(QueryParams params,
@Nullable ResultRankingParameters rankingParams) {
@Nullable RpcResultRankingParameters rankingParams) {
final var query = params.humanQuery();
if (query.length() > 1000) {
@@ -132,7 +133,9 @@ public class QueryFactory {
var limits = params.limits();
// Disable limits on number of results per domain if we're searching with a site:-type term
if (domain != null) {
limits = limits.forSingleDomain();
limits = RpcQueryLimits.newBuilder(limits)
.setResultsByDomain(limits.getResultsTotal())
.build();
}
var expansion = queryExpansion.expandQuery(queryBuilder.searchTermsInclude);


@@ -9,7 +9,7 @@ import nu.marginalia.api.searchquery.*;
import nu.marginalia.api.searchquery.model.query.ProcessedQuery;
import nu.marginalia.api.searchquery.model.query.QueryParams;
import nu.marginalia.api.searchquery.model.results.DecoratedSearchResultItem;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.api.searchquery.model.results.PrototypeRankingParameters;
import nu.marginalia.index.api.IndexClient;
import nu.marginalia.service.server.DiscoverableService;
import org.slf4j.Logger;
@@ -55,7 +55,7 @@ public class QueryGRPCService
.time(() -> {
var params = QueryProtobufCodec.convertRequest(request);
var query = queryFactory.createQuery(params, ResultRankingParameters.sensibleDefaults());
var query = queryFactory.createQuery(params, PrototypeRankingParameters.sensibleDefaults());
var indexRequest = QueryProtobufCodec.convertQuery(request, query);
@@ -102,7 +102,7 @@ public class QueryGRPCService
String originalQuery,
QueryParams params,
IndexClient.Pagination pagination,
ResultRankingParameters rankingParameters) {
RpcResultRankingParameters rankingParameters) {
var query = queryFactory.createQuery(params, rankingParameters);
IndexClient.AggregateQueryResponse response = indexClient.executeQueries(QueryProtobufCodec.convertQuery(originalQuery, query), pagination);


@@ -233,9 +233,19 @@ public class QueryParser {
entity.replace(new QueryToken.RankTerm(limit, str));
} else if (str.startsWith("qs=")) {
entity.replace(new QueryToken.QsTerm(str.substring(3)));
} else if (str.contains(":")) {
} else if (str.startsWith("site:")
|| str.startsWith("format:")
|| str.startsWith("file:")
|| str.startsWith("tld:")
|| str.startsWith("ip:")
|| str.startsWith("as:")
|| str.startsWith("asn:")
|| str.startsWith("generator:")
)
{
entity.replace(new QueryToken.AdviceTerm(str, t.displayStr()));
}
}
private static SpecificationLimit parseSpecificationLimit(String str) {
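
With this change only the whitelisted prefixes above are interpreted as advice terms; any other token containing a colon now falls through and is kept as an ordinary searchable keyword. Illustrative (hypothetical) queries:

site:marginalia.nu keyboards     ->  "site:marginalia.nu" still becomes an advice term restricting the domain
std::vector::push_back vector    ->  "std::vector::push_back" is no longer swallowed as an unrecognized
                                     filter and reaches the index as a keyword, which the new testCplusPlus
                                     case in QueryFactoryTest below exercises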


@@ -1,12 +1,12 @@
package nu.marginalia.query.svc;
import nu.marginalia.WmsaHome;
import nu.marginalia.api.searchquery.RpcQueryLimits;
import nu.marginalia.api.searchquery.RpcTemporalBias;
import nu.marginalia.api.searchquery.model.query.QueryParams;
import nu.marginalia.api.searchquery.model.query.SearchSpecification;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.functions.searchquery.QueryFactory;
import nu.marginalia.functions.searchquery.query_parser.QueryExpansion;
import nu.marginalia.index.query.limit.QueryLimits;
import nu.marginalia.index.query.limit.QueryStrategy;
import nu.marginalia.index.query.limit.SpecificationLimit;
import nu.marginalia.index.query.limit.SpecificationLimitType;
@@ -49,10 +49,15 @@ public class QueryFactoryTest {
SpecificationLimit.none(),
SpecificationLimit.none(),
null,
new QueryLimits(100, 100, 100, 100),
RpcQueryLimits.newBuilder()
.setResultsTotal(100)
.setResultsByDomain(100)
.setTimeoutMs(100)
.setFetchSize(100)
.build(),
"NONE",
QueryStrategy.AUTO,
ResultRankingParameters.TemporalBias.NONE,
RpcTemporalBias.Bias.NONE,
0), null).specs;
}
@@ -208,6 +213,12 @@ public class QueryFactoryTest {
System.out.println(subquery);
}
@Test
public void testCplusPlus() {
var subquery = parseAndGetSpecs("std::vector::push_back vector");
System.out.println(subquery);
}
@Test
public void testQuotedApostrophe() {
var subquery = parseAndGetSpecs("\"bob's cars\"");


@@ -16,20 +16,19 @@ import org.slf4j.LoggerFactory;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import static java.lang.Math.clamp;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;
@Singleton
public class IndexClient {
private static final Logger logger = LoggerFactory.getLogger(IndexClient.class);
private final GrpcMultiNodeChannelPool<IndexApiGrpc.IndexApiBlockingStub> channelPool;
private final DomainBlacklistImpl blacklist;
private static final ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
private static final ExecutorService executor = Executors.newCachedThreadPool();
@Inject
public IndexClient(GrpcChannelPoolFactory channelPoolFactory, DomainBlacklistImpl blacklist) {
@@ -51,40 +50,37 @@ public class IndexClient {
/** Execute a query on the index partitions and return the combined results. */
public AggregateQueryResponse executeQueries(RpcIndexQuery indexRequest, Pagination pagination) {
List<CompletableFuture<Iterator<RpcDecoratedResultItem>>> futures =
channelPool.call(IndexApiGrpc.IndexApiBlockingStub::query)
.async(executor)
.runEach(indexRequest);
final int requestedMaxResults = indexRequest.getQueryLimits().getResultsTotal();
final int resultsUpperBound = requestedMaxResults * channelPool.getNumNodes();
List<RpcDecoratedResultItem> results = new ArrayList<>(resultsUpperBound);
AtomicInteger totalNumResults = new AtomicInteger(0);
for (var future : futures) {
List<RpcDecoratedResultItem> results =
channelPool.call(IndexApiGrpc.IndexApiBlockingStub::query)
.async(executor)
.runEach(indexRequest)
.stream()
.map(future -> future.thenApply(iterator -> {
List<RpcDecoratedResultItem> ret = new ArrayList<>(requestedMaxResults);
iterator.forEachRemaining(ret::add);
totalNumResults.addAndGet(ret.size());
return ret;
}))
.mapMulti((CompletableFuture<List<RpcDecoratedResultItem>> fut, Consumer<List<RpcDecoratedResultItem>> c) ->{
try {
future.get().forEachRemaining(results::add);
}
catch (Exception e) {
logger.error("Downstream exception", e);
}
c.accept(fut.join());
} catch (Exception e) {
logger.error("Error while fetching results", e);
}
})
.flatMap(List::stream)
.filter(item -> !isBlacklisted(item))
.sorted(comparator)
.skip(Math.max(0, (pagination.page - 1) * pagination.pageSize))
.limit(pagination.pageSize)
.toList();
// Sort the results by ranking score and remove blacklisted domains
results.sort(comparator);
results.removeIf(this::isBlacklisted);
int numReceivedResults = results.size();
// pagination is typically 1-indexed, so we need to adjust the start and end indices
int indexStart = (pagination.page - 1) * pagination.pageSize;
int indexEnd = (pagination.page) * pagination.pageSize;
results = results.subList(
clamp(indexStart, 0, Math.max(0, results.size() - 1)), // from is inclusive, so subtract 1 from size()
clamp(indexEnd, 0, results.size()));
return new AggregateQueryResponse(results, pagination.page(), numReceivedResults);
return new AggregateQueryResponse(results, pagination.page(), totalNumResults.get());
}
private boolean isBlacklisted(RpcDecoratedResultItem item) {
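
With the rewrite above, pagination is applied to the merged, blacklist-filtered and score-sorted stream rather than to a clamped sublist: for the 1-indexed page p with page size s, (p - 1) * s results are skipped and at most s are kept, so e.g. page 3 with pageSize 10 yields items 20..29 of the combined ranking. Meanwhile totalNumResults accumulates every result returned by the index nodes before pagination, so the caller can size its pagination controls from the AggregateQueryResponse.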


@@ -10,12 +10,12 @@ import it.unimi.dsi.fastutil.longs.LongArrayList;
import nu.marginalia.api.searchquery.IndexApiGrpc;
import nu.marginalia.api.searchquery.RpcDecoratedResultItem;
import nu.marginalia.api.searchquery.RpcIndexQuery;
import nu.marginalia.api.searchquery.RpcResultRankingParameters;
import nu.marginalia.api.searchquery.model.compiled.CompiledQuery;
import nu.marginalia.api.searchquery.model.compiled.CompiledQueryLong;
import nu.marginalia.api.searchquery.model.compiled.CqDataInt;
import nu.marginalia.api.searchquery.model.query.SearchSpecification;
import nu.marginalia.api.searchquery.model.results.ResultRankingContext;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.array.page.LongQueryBuffer;
import nu.marginalia.index.index.StatefulIndex;
import nu.marginalia.index.model.SearchParameters;
@@ -211,7 +211,7 @@ public class IndexGrpcService
/** This class is responsible for ranking the results and adding the best results to the
* resultHeap, which depending on the state of the indexLookup threads may or may not block
*/
private ResultRankingContext createRankingContext(ResultRankingParameters rankingParams,
private ResultRankingContext createRankingContext(RpcResultRankingParameters rankingParams,
CompiledQuery<String> compiledQuery,
CompiledQueryLong compiledQueryIds)
{


@@ -2,12 +2,13 @@ package nu.marginalia.index.model;
import nu.marginalia.api.searchquery.IndexProtobufCodec;
import nu.marginalia.api.searchquery.RpcIndexQuery;
import nu.marginalia.api.searchquery.RpcResultRankingParameters;
import nu.marginalia.api.searchquery.model.compiled.CompiledQuery;
import nu.marginalia.api.searchquery.model.compiled.CompiledQueryLong;
import nu.marginalia.api.searchquery.model.compiled.CompiledQueryParser;
import nu.marginalia.api.searchquery.model.query.SearchSpecification;
import nu.marginalia.api.searchquery.model.query.SearchQuery;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.api.searchquery.model.query.SearchSpecification;
import nu.marginalia.api.searchquery.model.results.PrototypeRankingParameters;
import nu.marginalia.index.query.IndexSearchBudget;
import nu.marginalia.index.query.limit.QueryStrategy;
import nu.marginalia.index.searchset.SearchSet;
@@ -23,7 +24,7 @@ public class SearchParameters {
public final IndexSearchBudget budget;
public final SearchQuery query;
public final QueryParams queryParams;
public final ResultRankingParameters rankingParams;
public final RpcResultRankingParameters rankingParams;
public final int limitByDomain;
public final int limitTotal;
@@ -41,11 +42,11 @@ public class SearchParameters {
public SearchParameters(SearchSpecification specsSet, SearchSet searchSet) {
var limits = specsSet.queryLimits;
this.fetchSize = limits.fetchSize();
this.budget = new IndexSearchBudget(limits.timeoutMs());
this.fetchSize = limits.getFetchSize();
this.budget = new IndexSearchBudget(limits.getTimeoutMs());
this.query = specsSet.query;
this.limitByDomain = limits.resultsByDomain();
this.limitTotal = limits.resultsTotal();
this.limitByDomain = limits.getResultsByDomain();
this.limitTotal = limits.getResultsTotal();
queryParams = new QueryParams(
specsSet.quality,
@@ -62,17 +63,17 @@ public class SearchParameters {
}
public SearchParameters(RpcIndexQuery request, SearchSet searchSet) {
var limits = IndexProtobufCodec.convertQueryLimits(request.getQueryLimits());
var limits = request.getQueryLimits();
this.fetchSize = limits.fetchSize();
this.fetchSize = limits.getFetchSize();
// The time budget is halved because this is the point when we start to
// wrap up the search and return the results.
this.budget = new IndexSearchBudget(limits.timeoutMs() / 2);
this.budget = new IndexSearchBudget(limits.getTimeoutMs() / 2);
this.query = IndexProtobufCodec.convertRpcQuery(request.getQuery());
this.limitByDomain = limits.resultsByDomain();
this.limitTotal = limits.resultsTotal();
this.limitByDomain = limits.getResultsByDomain();
this.limitTotal = limits.getResultsTotal();
queryParams = new QueryParams(
convertSpecLimit(request.getQuality()),
@@ -85,7 +86,7 @@ public class SearchParameters {
compiledQuery = CompiledQueryParser.parse(this.query.compiledQuery);
compiledQueryIds = compiledQuery.mapToLong(SearchTermsUtil::getWordId);
rankingParams = IndexProtobufCodec.convertRankingParameterss(request.getParameters());
rankingParams = request.hasParameters() ? request.getParameters() : PrototypeRankingParameters.sensibleDefaults();
}


@@ -2,7 +2,6 @@ package nu.marginalia.index.results;
import nu.marginalia.api.searchquery.model.compiled.CqDataInt;
import nu.marginalia.api.searchquery.model.compiled.CqExpression;
import nu.marginalia.api.searchquery.model.results.Bm25Parameters;
import nu.marginalia.api.searchquery.model.results.ResultRankingContext;
import java.util.BitSet;
@@ -24,14 +23,14 @@ public class Bm25GraphVisitor implements CqExpression.DoubleVisitor {
private final BitSet mask;
public Bm25GraphVisitor(Bm25Parameters bm25Parameters,
public Bm25GraphVisitor(double k1, double b,
float[] counts,
int length,
ResultRankingContext ctx) {
this.length = length;
this.k1 = bm25Parameters.k();
this.b = bm25Parameters.b();
this.k1 = k1;
this.b = b;
this.docCount = ctx.termFreqDocCount();
this.counts = counts;


@@ -0,0 +1,119 @@
package nu.marginalia.index.results;
import com.google.inject.Inject;
import com.google.inject.Singleton;
import gnu.trove.map.hash.TIntDoubleHashMap;
import nu.marginalia.WmsaHome;
import nu.marginalia.db.DbDomainQueries;
import nu.marginalia.model.EdgeDomain;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.OptionalInt;
import java.util.concurrent.TimeUnit;
@Singleton
public class DomainRankingOverrides {
private final DbDomainQueries domainQueries;
private volatile TIntDoubleHashMap rankingFactors = new TIntDoubleHashMap(100, 0.75f, -1, 1.);
private static final Logger logger = LoggerFactory.getLogger(DomainRankingOverrides.class);
private final Path overrideFilePath;
@Inject
public DomainRankingOverrides(DbDomainQueries domainQueries) {
this.domainQueries = domainQueries;
overrideFilePath = WmsaHome.getDataPath().resolve("domain-ranking-factors.txt");
Thread.ofPlatform().start(this::updateRunner);
}
// for test access
public DomainRankingOverrides(DbDomainQueries domainQueries, Path overrideFilePath)
{
this.domainQueries = domainQueries;
this.overrideFilePath = overrideFilePath;
}
public double getRankingFactor(int domainId) {
return rankingFactors.get(domainId);
}
private void updateRunner() {
for (;;) {
reloadFile();
try {
TimeUnit.MINUTES.sleep(5);
} catch (InterruptedException ex) {
logger.warn("Thread interrupted", ex);
break;
}
}
}
void reloadFile() {
if (!Files.exists(overrideFilePath)) {
return;
}
try {
List<String> lines = Files.readAllLines(overrideFilePath);
double factor = 1.;
var newRankingFactors = new TIntDoubleHashMap(lines.size(), 0.75f, -1, 1.);
for (var line : lines) {
if (line.isBlank()) continue;
if (line.startsWith("#")) continue;
String[] parts = line.split("\\s+");
if (parts.length != 2) {
logger.warn("Unrecognized format for domain overrides file: {}", line);
continue;
}
try {
switch (parts[0]) {
case "value" -> {
// error handle me
factor = Double.parseDouble(parts[1]);
if (factor < 0) {
logger.error("Negative values are not permitted, found {}", factor);
factor = 1;
}
}
case "domain" -> {
// error handle
OptionalInt domainId = domainQueries.tryGetDomainId(new EdgeDomain(parts[1]));
if (domainId.isPresent()) {
newRankingFactors.put(domainId.getAsInt(), factor);
}
else {
logger.warn("Unrecognized domain id {}", parts[1]);
}
}
default -> {
logger.warn("Unrecognized format {}", line);
}
}
} catch (Exception ex) {
logger.warn("Error in parsing domain overrides file: {} ({})", line, ex.getClass().getSimpleName());
}
}
rankingFactors = newRankingFactors;
} catch (IOException ex) {
logger.error("Failed to read " + overrideFilePath, ex);
}
}
}
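
The override file read here is plain text at <WmsaHome data path>/domain-ranking-factors.txt and is re-read every five minutes. Blank lines and lines starting with # are ignored; a "value <factor>" line sets the current multiplier (negative values are rejected) and each subsequent "domain <name>" line assigns that multiplier to the named domain, with unlisted domains defaulting to 1.0. The factor is later multiplied into the positive score components in IndexResultScoreCalculator. A sketch of the format, with placeholder domain names:

# promote these slightly
value 1.1
domain good-blog.example.com

# demote these
value 0.75
domain linkfarm.example.com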


@@ -40,13 +40,16 @@ public class IndexResultRankingService {
private final DocumentDbReader documentDbReader;
private final StatefulIndex statefulIndex;
private final DomainRankingOverrides domainRankingOverrides;
@Inject
public IndexResultRankingService(DocumentDbReader documentDbReader,
StatefulIndex statefulIndex)
StatefulIndex statefulIndex,
DomainRankingOverrides domainRankingOverrides)
{
this.documentDbReader = documentDbReader;
this.statefulIndex = statefulIndex;
this.domainRankingOverrides = domainRankingOverrides;
}
public List<SearchResultItem> rankResults(SearchParameters params,
@@ -57,7 +60,7 @@ public class IndexResultRankingService {
if (resultIds.isEmpty())
return List.of();
IndexResultScoreCalculator resultRanker = new IndexResultScoreCalculator(statefulIndex, rankingContext, params);
IndexResultScoreCalculator resultRanker = new IndexResultScoreCalculator(statefulIndex, domainRankingOverrides, rankingContext, params);
List<SearchResultItem> results = new ArrayList<>(resultIds.size());
@@ -156,7 +159,7 @@ public class IndexResultRankingService {
// for the selected results, as this would be comically expensive to do for all the results we
// discard along the way
if (params.rankingParams.exportDebugData) {
if (params.rankingParams.getExportDebugData()) {
var combinedIdsList = new LongArrayList(resultsList.size());
for (var item : resultsList) {
combinedIdsList.add(item.combinedId);


@@ -2,10 +2,11 @@ package nu.marginalia.index.results;
import it.unimi.dsi.fastutil.ints.IntIterator;
import it.unimi.dsi.fastutil.ints.IntList;
import nu.marginalia.api.searchquery.RpcResultRankingParameters;
import nu.marginalia.api.searchquery.RpcTemporalBias;
import nu.marginalia.api.searchquery.model.compiled.CompiledQuery;
import nu.marginalia.api.searchquery.model.compiled.CompiledQueryLong;
import nu.marginalia.api.searchquery.model.results.ResultRankingContext;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.api.searchquery.model.results.SearchResultItem;
import nu.marginalia.api.searchquery.model.results.debug.DebugRankingFactors;
import nu.marginalia.index.forward.spans.DocumentSpans;
@@ -40,14 +41,17 @@ public class IndexResultScoreCalculator {
private final CombinedIndexReader index;
private final QueryParams queryParams;
private final DomainRankingOverrides domainRankingOverrides;
private final ResultRankingContext rankingContext;
private final CompiledQuery<String> compiledQuery;
public IndexResultScoreCalculator(StatefulIndex statefulIndex,
DomainRankingOverrides domainRankingOverrides,
ResultRankingContext rankingContext,
SearchParameters params)
{
this.index = statefulIndex.get();
this.domainRankingOverrides = domainRankingOverrides;
this.rankingContext = rankingContext;
this.queryParams = params.queryParams;
@@ -116,20 +120,20 @@ public class IndexResultScoreCalculator {
float proximitiyFac = getProximitiyFac(decodedPositions, searchTerms.phraseConstraints, verbatimMatches, unorderedMatches, spans);
double score_firstPosition = params.tcfFirstPosition * (1.0 / Math.sqrt(unorderedMatches.firstPosition));
double score_verbatim = params.tcfVerbatim * verbatimMatches.getScore();
double score_proximity = params.tcfProximity * proximitiyFac;
double score_bM25 = params.bm25Weight
* wordFlagsQuery.root.visit(new Bm25GraphVisitor(params.bm25Params, unorderedMatches.getWeightedCounts(), docSize, rankingContext))
double score_firstPosition = params.getTcfFirstPositionWeight() * (1.0 / Math.sqrt(unorderedMatches.firstPosition));
double score_verbatim = params.getTcfVerbatimWeight() * verbatimMatches.getScore();
double score_proximity = params.getTcfProximityWeight() * proximitiyFac;
double score_bM25 = params.getBm25Weight()
* wordFlagsQuery.root.visit(new Bm25GraphVisitor(params.getBm25K(), params.getBm25B(), unorderedMatches.getWeightedCounts(), docSize, rankingContext))
/ (Math.sqrt(unorderedMatches.searchableKeywordCount + 1));
double score_bFlags = params.bm25Weight
* wordFlagsQuery.root.visit(new TermFlagsGraphVisitor(params.bm25Params, wordFlagsQuery.data, unorderedMatches.getWeightedCounts(), rankingContext))
double score_bFlags = params.getBm25Weight()
* wordFlagsQuery.root.visit(new TermFlagsGraphVisitor(params.getBm25K(), wordFlagsQuery.data, unorderedMatches.getWeightedCounts(), rankingContext))
/ (Math.sqrt(unorderedMatches.searchableKeywordCount + 1));
double rankingAdjustment = domainRankingOverrides.getRankingFactor(UrlIdCodec.getDomainId(combinedId));
double score = normalize(
score_firstPosition + score_proximity + score_verbatim
+ score_bM25
+ score_bFlags,
rankingAdjustment * (score_firstPosition + score_proximity + score_verbatim + score_bM25 + score_bFlags),
-Math.min(0, documentBonus) // The magnitude of documentBonus, if it is negative; otherwise 0
);
@@ -245,9 +249,13 @@ public class IndexResultScoreCalculator {
private double calculateDocumentBonus(long documentMetadata,
int features,
int length,
ResultRankingParameters rankingParams,
RpcResultRankingParameters rankingParams,
@Nullable DebugRankingFactors debugRankingFactors) {
if (rankingParams.getDisablePenalties()) {
return 0.;
}
int rank = DocumentMetadata.decodeRank(documentMetadata);
int asl = DocumentMetadata.decodeAvgSentenceLength(documentMetadata);
int quality = DocumentMetadata.decodeQuality(documentMetadata);
@@ -256,18 +264,18 @@ public class IndexResultScoreCalculator {
int topology = DocumentMetadata.decodeTopology(documentMetadata);
int year = DocumentMetadata.decodeYear(documentMetadata);
double averageSentenceLengthPenalty = (asl >= rankingParams.shortSentenceThreshold ? 0 : -rankingParams.shortSentencePenalty);
double averageSentenceLengthPenalty = (asl >= rankingParams.getShortSentenceThreshold() ? 0 : -rankingParams.getShortSentencePenalty());
final double qualityPenalty = calculateQualityPenalty(size, quality, rankingParams);
final double rankingBonus = (255. - rank) * rankingParams.domainRankBonus;
final double rankingBonus = (255. - rank) * rankingParams.getDomainRankBonus();
final double topologyBonus = Math.log(1 + topology);
final double documentLengthPenalty = length > rankingParams.shortDocumentThreshold ? 0 : -rankingParams.shortDocumentPenalty;
final double documentLengthPenalty = length > rankingParams.getShortDocumentThreshold() ? 0 : -rankingParams.getShortDocumentPenalty();
final double temporalBias;
if (rankingParams.temporalBias == ResultRankingParameters.TemporalBias.RECENT) {
temporalBias = - Math.abs(year - PubDate.MAX_YEAR) * rankingParams.temporalBiasWeight;
} else if (rankingParams.temporalBias == ResultRankingParameters.TemporalBias.OLD) {
temporalBias = - Math.abs(year - PubDate.MIN_YEAR) * rankingParams.temporalBiasWeight;
if (rankingParams.getTemporalBias().getBias() == RpcTemporalBias.Bias.RECENT) {
temporalBias = - Math.abs(year - PubDate.MAX_YEAR) * rankingParams.getTemporalBiasWeight();
} else if (rankingParams.getTemporalBias().getBias() == RpcTemporalBias.Bias.OLD) {
temporalBias = - Math.abs(year - PubDate.MIN_YEAR) * rankingParams.getTemporalBiasWeight();
} else {
temporalBias = 0;
}
@@ -506,14 +514,14 @@ public class IndexResultScoreCalculator {
}
private double calculateQualityPenalty(int size, int quality, ResultRankingParameters rankingParams) {
private double calculateQualityPenalty(int size, int quality, RpcResultRankingParameters rankingParams) {
if (size < 400) {
if (quality < 5)
return 0;
return -quality * rankingParams.qualityPenalty;
return -quality * rankingParams.getQualityPenalty();
}
else {
return -quality * rankingParams.qualityPenalty * 20;
return -quality * rankingParams.getQualityPenalty() * 20;
}
}
@@ -575,3 +583,4 @@ public class IndexResultScoreCalculator {
}
}


@@ -3,7 +3,6 @@ package nu.marginalia.index.results;
import nu.marginalia.api.searchquery.model.compiled.CqDataInt;
import nu.marginalia.api.searchquery.model.compiled.CqDataLong;
import nu.marginalia.api.searchquery.model.compiled.CqExpression;
import nu.marginalia.api.searchquery.model.results.Bm25Parameters;
import nu.marginalia.api.searchquery.model.results.ResultRankingContext;
import nu.marginalia.model.idx.WordFlags;
@@ -15,15 +14,14 @@ public class TermFlagsGraphVisitor implements CqExpression.DoubleVisitor {
private final CqDataLong wordMetaData;
private final CqDataInt frequencies;
private final float[] counts;
private final Bm25Parameters bm25Parameters;
private final double k1;
private final int docCount;
public TermFlagsGraphVisitor(Bm25Parameters bm25Parameters,
public TermFlagsGraphVisitor(double k1,
CqDataLong wordMetaData,
float[] counts,
ResultRankingContext ctx) {
this.bm25Parameters = bm25Parameters;
this.k1 = k1;
this.counts = counts;
this.docCount = ctx.termFreqDocCount();
this.wordMetaData = wordMetaData;
@@ -55,7 +53,7 @@ public class TermFlagsGraphVisitor implements CqExpression.DoubleVisitor {
int freq = frequencies.get(idx);
// note we override b to zero for priority terms as they are independent of document length
return invFreq(docCount, freq) * f(bm25Parameters.k(), 0, count, 0);
return invFreq(docCount, freq) * f(k1, 0, count, 0);
}
private double evaluatePriorityScore(int idx) {


@@ -1,7 +0,0 @@
package nu.marginalia.index.query.limit;
public record QueryLimits(int resultsByDomain, int resultsTotal, int timeoutMs, int fetchSize) {
public QueryLimits forSingleDomain() {
return new QueryLimits(resultsTotal, resultsTotal, timeoutMs, fetchSize);
}
}
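
With the QueryLimits record removed, callers now construct the equivalent protobuf message directly, and the old forSingleDomain() behaviour is expressed by seeding a builder from the existing limits and copying resultsTotal into resultsByDomain, as QueryFactory now does. A minimal sketch against the generated RpcQueryLimits API used elsewhere in this changeset; the numeric values are illustrative only:

import nu.marginalia.api.searchquery.RpcQueryLimits;

class QueryLimitsMigrationExample {
    static RpcQueryLimits exampleLimits() {
        // Builds a limits message the way the updated tests do
        return RpcQueryLimits.newBuilder()
                .setResultsByDomain(10)
                .setResultsTotal(100)
                .setTimeoutMs(5_000)
                .setFetchSize(4_000)
                .build();
    }

    // Equivalent of the removed QueryLimits.forSingleDomain()
    static RpcQueryLimits forSingleDomain(RpcQueryLimits limits) {
        return RpcQueryLimits.newBuilder(limits)
                .setResultsByDomain(limits.getResultsTotal())
                .build();
    }
}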


@@ -4,10 +4,11 @@ import com.google.inject.Guice;
import com.google.inject.Inject;
import nu.marginalia.IndexLocations;
import nu.marginalia.api.searchquery.RpcDecoratedResultItem;
import nu.marginalia.api.searchquery.RpcQueryLimits;
import nu.marginalia.api.searchquery.model.query.SearchPhraseConstraint;
import nu.marginalia.api.searchquery.model.query.SearchQuery;
import nu.marginalia.api.searchquery.model.query.SearchSpecification;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.api.searchquery.model.results.PrototypeRankingParameters;
import nu.marginalia.index.construction.DocIdRewriter;
import nu.marginalia.index.construction.full.FullIndexConstructor;
import nu.marginalia.index.construction.prio.PrioIndexConstructor;
@@ -17,7 +18,6 @@ import nu.marginalia.index.forward.construction.ForwardIndexConverter;
import nu.marginalia.index.index.StatefulIndex;
import nu.marginalia.index.journal.IndexJournal;
import nu.marginalia.index.journal.IndexJournalSlopWriter;
import nu.marginalia.index.query.limit.QueryLimits;
import nu.marginalia.index.query.limit.QueryStrategy;
import nu.marginalia.index.query.limit.SpecificationLimit;
import nu.marginalia.linkdb.docs.DocumentDbReader;
@@ -115,9 +115,16 @@ public class IndexQueryServiceIntegrationSmokeTest {
var rsp = queryService.justQuery(
SearchSpecification.builder()
.queryLimits(new QueryLimits(10, 10, Integer.MAX_VALUE, 4000))
.queryLimits(
RpcQueryLimits.newBuilder()
.setResultsByDomain(10)
.setResultsTotal(10)
.setTimeoutMs(Integer.MAX_VALUE)
.setFetchSize(4000)
.build()
)
.queryStrategy(QueryStrategy.SENTENCE)
.rankingParams(ResultRankingParameters.sensibleDefaults())
.rankingParams(PrototypeRankingParameters.sensibleDefaults())
.domains(new ArrayList<>())
.searchSetIdentifier("NONE")
.query(
@@ -171,9 +178,16 @@ public class IndexQueryServiceIntegrationSmokeTest {
var rsp = queryService.justQuery(
SearchSpecification.builder()
.queryLimits(new QueryLimits(10, 10, Integer.MAX_VALUE, 4000))
.queryLimits(
RpcQueryLimits.newBuilder()
.setResultsByDomain(10)
.setResultsTotal(10)
.setTimeoutMs(Integer.MAX_VALUE)
.setFetchSize(4000)
.build()
)
.queryStrategy(QueryStrategy.SENTENCE)
.rankingParams(ResultRankingParameters.sensibleDefaults())
.rankingParams(PrototypeRankingParameters.sensibleDefaults())
.domains(new ArrayList<>())
.searchSetIdentifier("NONE")
.query(
@@ -225,8 +239,15 @@ public class IndexQueryServiceIntegrationSmokeTest {
var rsp = queryService.justQuery(
SearchSpecification.builder()
.queryLimits(new QueryLimits(10, 10, Integer.MAX_VALUE, 4000))
.rankingParams(ResultRankingParameters.sensibleDefaults())
.queryLimits(
RpcQueryLimits.newBuilder()
.setResultsByDomain(10)
.setResultsTotal(10)
.setTimeoutMs(Integer.MAX_VALUE)
.setFetchSize(4000)
.build()
)
.rankingParams(PrototypeRankingParameters.sensibleDefaults())
.queryStrategy(QueryStrategy.SENTENCE)
.domains(List.of(2))
.query(
@@ -282,11 +303,18 @@ public class IndexQueryServiceIntegrationSmokeTest {
var rsp = queryService.justQuery(
SearchSpecification.builder()
.queryLimits(new QueryLimits(10, 10, Integer.MAX_VALUE, 4000))
.queryLimits(
RpcQueryLimits.newBuilder()
.setResultsByDomain(10)
.setResultsTotal(10)
.setTimeoutMs(Integer.MAX_VALUE)
.setFetchSize(4000)
.build()
)
.year(SpecificationLimit.equals(1998))
.queryStrategy(QueryStrategy.SENTENCE)
.searchSetIdentifier("NONE")
.rankingParams(ResultRankingParameters.sensibleDefaults())
.rankingParams(PrototypeRankingParameters.sensibleDefaults())
.query(
SearchQuery.builder()
.compiledQuery("4")


@@ -4,10 +4,11 @@ import com.google.inject.Guice;
import com.google.inject.Inject;
import it.unimi.dsi.fastutil.ints.IntList;
import nu.marginalia.IndexLocations;
import nu.marginalia.api.searchquery.RpcQueryLimits;
import nu.marginalia.api.searchquery.model.query.SearchPhraseConstraint;
import nu.marginalia.api.searchquery.model.query.SearchQuery;
import nu.marginalia.api.searchquery.model.query.SearchSpecification;
import nu.marginalia.api.searchquery.model.results.ResultRankingParameters;
import nu.marginalia.api.searchquery.model.results.PrototypeRankingParameters;
import nu.marginalia.hash.MurmurHash3_128;
import nu.marginalia.index.construction.DocIdRewriter;
import nu.marginalia.index.construction.full.FullIndexConstructor;
@@ -18,7 +19,6 @@ import nu.marginalia.index.forward.construction.ForwardIndexConverter;
import nu.marginalia.index.index.StatefulIndex;
import nu.marginalia.index.journal.IndexJournal;
import nu.marginalia.index.journal.IndexJournalSlopWriter;
import nu.marginalia.index.query.limit.QueryLimits;
import nu.marginalia.index.query.limit.QueryStrategy;
import nu.marginalia.index.query.limit.SpecificationLimit;
import nu.marginalia.linkdb.docs.DocumentDbReader;
@@ -389,13 +389,20 @@ public class IndexQueryServiceIntegrationTest {
SearchSpecification basicQuery(Function<SearchSpecification.SearchSpecificationBuilder, SearchSpecification.SearchSpecificationBuilder> mutator)
{
var builder = SearchSpecification.builder()
.queryLimits(new QueryLimits(10, 10, Integer.MAX_VALUE, 4000))
.queryLimits(
RpcQueryLimits.newBuilder()
.setResultsByDomain(10)
.setResultsTotal(10)
.setTimeoutMs(Integer.MAX_VALUE)
.setFetchSize(4000)
.build()
)
.queryStrategy(QueryStrategy.SENTENCE)
.year(SpecificationLimit.none())
.quality(SpecificationLimit.none())
.size(SpecificationLimit.none())
.rank(SpecificationLimit.none())
.rankingParams(ResultRankingParameters.sensibleDefaults())
.rankingParams(PrototypeRankingParameters.sensibleDefaults())
.domains(new ArrayList<>())
.searchSetIdentifier("NONE");


@@ -0,0 +1,103 @@
package nu.marginalia.index.results;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import nu.marginalia.db.DbDomainQueries;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.test.TestMigrationLoader;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.parallel.Execution;
import org.junit.jupiter.api.parallel.ExecutionMode;
import org.testcontainers.containers.MariaDBContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.sql.SQLException;
@Testcontainers
@Execution(ExecutionMode.SAME_THREAD)
@Tag("slow")
class DomainRankingOverridesTest {
@Container
static MariaDBContainer<?> mariaDBContainer = new MariaDBContainer<>("mariadb")
.withDatabaseName("WMSA_prod")
.withUsername("wmsa")
.withPassword("wmsa")
.withNetworkAliases("mariadb");
private static DbDomainQueries domainQueries;
@BeforeAll
public static void setup() throws SQLException {
HikariConfig config = new HikariConfig();
config.setJdbcUrl(mariaDBContainer.getJdbcUrl());
config.setUsername("wmsa");
config.setPassword("wmsa");
var dataSource = new HikariDataSource(config);
TestMigrationLoader.flywayMigration(dataSource);
try (var conn = dataSource.getConnection();
var stmt = conn.createStatement()) {
stmt.executeQuery("DELETE FROM EC_DOMAIN"); // Wipe any old state from other test runs
stmt.executeQuery("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('first.example.com', 'example.com', 1)");
stmt.executeQuery("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('second.example.com', 'example.com', 1)");
stmt.executeQuery("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('third.example.com', 'example.com', 1)");
stmt.executeQuery("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('not-added.example.com', 'example.com', 1)");
}
domainQueries = new DbDomainQueries(dataSource);
}
@Test
public void test() throws IOException {
Path overridesFile = Files.createTempFile(getClass().getSimpleName(), ".txt");
try {
Files.writeString(overridesFile, """
# A comment
value 0.75
domain first.example.com
domain second.example.com
value 1.1
domain third.example.com
""",
StandardOpenOption.APPEND);
var overrides = new DomainRankingOverrides(domainQueries, overridesFile);
overrides.reloadFile();
Assertions.assertEquals(0.75, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("first.example.com"))
));
Assertions.assertEquals(0.75, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("second.example.com"))
));
Assertions.assertEquals(1.1, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("third.example.com"))
));
Assertions.assertEquals(1.0, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("not-added.example.com"))
));
Assertions.assertEquals(1.0, overrides.getRankingFactor(1<<23));
}
finally {
Files.deleteIfExists(overridesFile);
}
}
}


@@ -45,6 +45,11 @@ public class GammaCodedSequenceArrayColumn extends AbstractObjectColumn<List<Gam
);
}
@Override
public int alignmentSize() {
return 1;
}
public Reader openUnregistered(URI uri, int page) throws IOException {
return new Reader(
dataColumn.openUnregistered(uri, page),
@@ -109,6 +114,11 @@ public class GammaCodedSequenceArrayColumn extends AbstractObjectColumn<List<Gam
dataReader.skip(toSkip);
}
@Override
public boolean isDirect() {
return dataReader.isDirect();
}
@Override
public boolean hasRemaining() throws IOException {
return groupsReader.hasRemaining();


@@ -44,6 +44,11 @@ public class GammaCodedSequenceColumn extends AbstractObjectColumn<GammaCodedSeq
);
}
@Override
public int alignmentSize() {
return 1;
}
public Reader openUnregistered(URI uri, int page) throws IOException {
return new Reader(
Storage.reader(uri, this, page, false),
@@ -96,6 +101,11 @@ public class GammaCodedSequenceColumn extends AbstractObjectColumn<GammaCodedSeq
this.indexReader = indexReader;
}
@Override
public boolean isDirect() {
return storage.isDirect();
}
@Override
public AbstractColumn<?, ?> columnDesc() {
return GammaCodedSequenceColumn.this;


@@ -45,6 +45,11 @@ public class VarintCodedSequenceArrayColumn extends AbstractObjectColumn<List<Va
);
}
@Override
public int alignmentSize() {
return 0;
}
public Reader openUnregistered(URI uri, int page) throws IOException {
return new Reader(
dataColumn.openUnregistered(uri, page),
@@ -109,6 +114,11 @@ public class VarintCodedSequenceArrayColumn extends AbstractObjectColumn<List<Va
dataReader.skip(toSkip);
}
@Override
public boolean isDirect() {
return dataReader.isDirect();
}
@Override
public boolean hasRemaining() throws IOException {
return groupsReader.hasRemaining();


@@ -44,6 +44,11 @@ public class VarintCodedSequenceColumn extends AbstractObjectColumn<VarintCodedS
);
}
@Override
public int alignmentSize() {
return 1;
}
public Reader openUnregistered(URI uri, int page) throws IOException {
return new Reader(
Storage.reader(uri, this, page, false),
@@ -101,6 +106,11 @@ public class VarintCodedSequenceColumn extends AbstractObjectColumn<VarintCodedS
return VarintCodedSequenceColumn.this;
}
@Override
public boolean isDirect() {
return storage.isDirect();
}
@Override
public void skip(long positions) throws IOException {
for (int i = 0; i < positions; i++) {


@@ -155,8 +155,15 @@ public class SentenceExtractor {
public List<DocumentSentence> extractSentencesFromString(String text, EnumSet<HtmlTag> htmlTags) {
String[] sentences;
// Normalize spaces
// Safety net against malformed data DOS attacks,
// found 5+ MB <p>-tags in the wild that just break
// the sentence extractor causing it to stall forever.
if (text.length() > 50_000) {
// 50k chars can hold a small novel, let alone single html tags
text = text.substring(0, 50_000);
}
// Normalize spaces
text = normalizeSpaces(text);
// Split into sentences


@@ -152,7 +152,10 @@ public class DocumentPositionMapper {
}
boolean matchesWordPattern(String s) {
// this function is an unrolled version of the regexp [\da-zA-Z]{1,15}([.\-_/:+*][\da-zA-Z]{1,10}){0,4}
if (s.length() > 48)
return false;
// this function is an unrolled version of the regexp [\da-zA-Z]{1,15}([.\-_/:+*][\da-zA-Z]{1,10}){0,8}
String wordPartSeparator = ".-_/:+*";
@@ -169,7 +172,7 @@ public class DocumentPositionMapper {
if (i == 0)
return false;
for (int j = 0; j < 5; j++) {
for (int j = 0; j < 8; j++) {
if (i == s.length()) return true;
if (wordPartSeparator.indexOf(s.charAt(i)) < 0) {


@@ -30,9 +30,11 @@ class DocumentPositionMapperTest {
Assertions.assertFalse(positionMapper.matchesWordPattern("1234567890abcdef"));
Assertions.assertTrue(positionMapper.matchesWordPattern("test-test-test-test-test"));
Assertions.assertFalse(positionMapper.matchesWordPattern("test-test-test-test-test-test"));
Assertions.assertFalse(positionMapper.matchesWordPattern("test-test-test-test-test-test-test-test-test"));
Assertions.assertTrue(positionMapper.matchesWordPattern("192.168.1.100/24"));
Assertions.assertTrue(positionMapper.matchesWordPattern("std::vector"));
Assertions.assertTrue(positionMapper.matchesWordPattern("std::vector::push_back"));
Assertions.assertTrue(positionMapper.matchesWordPattern("c++"));
Assertions.assertTrue(positionMapper.matchesWordPattern("m*a*s*h"));
Assertions.assertFalse(positionMapper.matchesWordPattern("Stulpnagelstrasse"));


@@ -12,7 +12,6 @@ import nu.marginalia.converting.sideload.SideloadSourceFactory;
import nu.marginalia.converting.writer.ConverterBatchWritableIf;
import nu.marginalia.converting.writer.ConverterBatchWriter;
import nu.marginalia.converting.writer.ConverterWriter;
import nu.marginalia.io.CrawledDomainReader;
import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.mq.MessageQueueFactory;
import nu.marginalia.mqapi.converting.ConvertRequest;
@@ -51,6 +50,7 @@ public class ConverterMain extends ProcessMainClass {
private final ProcessHeartbeat heartbeat;
private final FileStorageService fileStorageService;
private final SideloadSourceFactory sideloadSourceFactory;
private static final int SIDELOAD_THRESHOLD = Integer.getInteger("converter.sideloadThreshold", 10_000);
public static void main(String... args) throws Exception {
@@ -201,12 +201,20 @@ public class ConverterMain extends ProcessMainClass {
processedDomains.set(batchingWorkLog.size());
heartbeat.setProgress(processedDomains.get() / (double) totalDomains);
for (var domain : WorkLog.iterableMap(crawlDir.getLogFile(),
logger.info("Processing small items");
int numBigTasks = 0;
// First process the small items
for (var dataPath : WorkLog.iterableMap(crawlDir.getLogFile(),
new CrawlDataLocator(crawlDir.getDir(), batchingWorkLog)))
{
if (SerializableCrawlDataStream.getSizeHint(dataPath) >= SIDELOAD_THRESHOLD) {
numBigTasks ++;
continue;
}
pool.submit(() -> {
try {
ConverterBatchWritableIf writable = processor.createWritable(domain);
try (var dataStream = SerializableCrawlDataStream.openDataStream(dataPath)) {
ConverterBatchWritableIf writable = processor.fullProcessing(dataStream) ;
converterWriter.accept(writable);
}
catch (Exception ex) {
@@ -225,10 +233,46 @@ public class ConverterMain extends ProcessMainClass {
do {
System.out.println("Waiting for pool to terminate... " + pool.getActiveCount() + " remaining");
} while (!pool.awaitTermination(60, TimeUnit.SECONDS));
logger.info("Processing large items");
try (var hb = heartbeat.createAdHocTaskHeartbeat("Large Domains")) {
int bigTaskIdx = 0;
// Next the big items domain-by-domain
for (var dataPath : WorkLog.iterableMap(crawlDir.getLogFile(),
new CrawlDataLocator(crawlDir.getDir(), batchingWorkLog)))
{
int sizeHint = SerializableCrawlDataStream.getSizeHint(dataPath);
if (sizeHint < SIDELOAD_THRESHOLD) {
continue;
}
hb.progress(dataPath.toFile().getName(), bigTaskIdx++, numBigTasks);
try {
// SerializableCrawlDataStream is autocloseable, we can't try-with-resources because then it will be
// closed before it's consumed by the converterWriter. Instead, the converterWriter guarantees it
// will close it after it's consumed.
var stream = SerializableCrawlDataStream.openDataStream(dataPath);
ConverterBatchWritableIf writable = processor.simpleProcessing(stream, sizeHint);
converterWriter.accept(writable);
}
catch (Exception ex) {
logger.info("Error in processing", ex);
}
finally {
heartbeat.setProgress(processedDomains.incrementAndGet() / (double) totalDomains);
}
}
}
private static class CrawlDataLocator implements Function<WorkLogEntry, Optional<SerializableCrawlDataStream>> {
logger.info("Processing complete");
}
}
private static class CrawlDataLocator implements Function<WorkLogEntry, Optional<Path>> {
private final Path crawlRootDir;
private final BatchingWorkLog batchingWorkLog;
@@ -239,7 +283,7 @@ public class ConverterMain extends ProcessMainClass {
}
@Override
public Optional<SerializableCrawlDataStream> apply(WorkLogEntry entry) {
public Optional<Path> apply(WorkLogEntry entry) {
if (batchingWorkLog.isItemProcessed(entry.id())) {
return Optional.empty();
}
@@ -252,7 +296,7 @@ public class ConverterMain extends ProcessMainClass {
}
try {
return Optional.of(CrawledDomainReader.createDataStream(path));
return Optional.of(path);
}
catch (Exception ex) {
return Optional.empty();


@@ -19,6 +19,7 @@ import nu.marginalia.model.idx.WordFlags;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.URISyntaxException;
import java.util.ArrayList;
import java.util.List;
@@ -91,7 +92,7 @@ public class DocumentProcessor {
DocumentClass documentClass,
DocumentDecorator documentDecorator,
DomainLinks externalDomainLinks,
ProcessedDocument ret) throws URISyntaxException, DisqualifiedException
ProcessedDocument ret) throws URISyntaxException, IOException, DisqualifiedException
{
var crawlerStatus = CrawlerDocumentStatus.valueOf(crawledDocument.crawlerStatus);
@@ -109,7 +110,7 @@ public class DocumentProcessor {
ret.state = crawlerStatusToUrlState(crawledDocument.crawlerStatus, crawledDocument.httpStatus);
final var plugin = findPlugin(crawledDocument);
AbstractDocumentProcessorPlugin plugin = findPlugin(crawledDocument);
EdgeUrl url = new EdgeUrl(crawledDocument.url);
LinkTexts linkTexts = anchorTextKeywords.getAnchorTextKeywords(externalDomainLinks, url);

View File

@@ -32,7 +32,6 @@ import java.util.*;
import java.util.regex.Pattern;
public class DomainProcessor {
private static final int SIDELOAD_THRESHOLD = Integer.getInteger("converter.sideloadThreshold", 10_000);
private final DocumentProcessor documentProcessor;
private final SiteWords siteWords;
private final AnchorTagsSource anchorTagsSource;
@@ -54,21 +53,9 @@ public class DomainProcessor {
geoIpDictionary.waitReady();
}
public ConverterBatchWritableIf createWritable(SerializableCrawlDataStream domain) {
final int sizeHint = domain.sizeHint();
if (sizeHint > SIDELOAD_THRESHOLD) {
// If the file is too big, we run a processing mode that doesn't
// require loading the entire dataset into RAM
return sideloadProcessing(domain, sizeHint);
}
return fullProcessing(domain);
}
public SideloadProcessing sideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) {
public SimpleProcessing simpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) {
try {
return new SideloadProcessing(dataStream, sizeHint, extraKeywords);
return new SimpleProcessing(dataStream, sizeHint, extraKeywords);
}
catch (Exception ex) {
logger.warn("Failed to process domain sideload", ex);
@@ -76,9 +63,9 @@ public class DomainProcessor {
}
}
public SideloadProcessing sideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint) {
public SimpleProcessing simpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint) {
try {
return new SideloadProcessing(dataStream, sizeHint);
return new SimpleProcessing(dataStream, sizeHint);
}
catch (Exception ex) {
logger.warn("Failed to process domain sideload", ex);
@@ -86,22 +73,84 @@ public class DomainProcessor {
}
}
public class SideloadProcessing implements ConverterBatchWritableIf, SideloadSource {
@Nullable
public ProcessedDomain fullProcessing(SerializableCrawlDataStream dataStream) {
try {
if (!dataStream.hasNext()) {
return null;
}
List<ProcessedDocument> docs = new ArrayList<>();
Set<String> processedUrls = new HashSet<>();
if (!(dataStream.next() instanceof CrawledDomain crawledDomain)) {
throw new IllegalStateException("First record must be a domain, was " + dataStream.next().getClass().getSimpleName());
}
DomainLinks externalDomainLinks = anchorTagsSource.getAnchorTags(crawledDomain.getDomain());
DocumentDecorator documentDecorator = new DocumentDecorator();
// Process Domain Record
ProcessedDomain ret = new ProcessedDomain();
processDomain(crawledDomain, ret, documentDecorator);
ret.documents = docs;
// Process Documents
try (var deduplicator = new LshDocumentDeduplicator()) {
while (dataStream.hasNext()) {
if (!(dataStream.next() instanceof CrawledDocument doc))
continue;
if (doc.url == null)
continue;
if (doc.documentBodyBytes.length == 0)
continue;
if (!processedUrls.add(doc.url))
continue;
try {
var processedDoc = documentProcessor.process(doc, ret.domain, externalDomainLinks, documentDecorator);
deduplicator.markIfDuplicate(processedDoc);
docs.add(processedDoc);
} catch (Exception ex) {
logger.warn("Failed to process " + doc.url, ex);
}
}
}
// Add late keywords and features from domain-level information
calculateStatistics(ret, externalDomainLinks);
return ret;
}
catch (Exception ex) {
logger.warn("Failed to process domain", ex);
return null;
}
}
/** The simple processing track processes documents individually, and does not perform any domain-level analysis.
* This is needed to process extremely large domains, which would otherwise eat up too much RAM.
*/
public class SimpleProcessing implements ConverterBatchWritableIf, SideloadSource {
private final SerializableCrawlDataStream dataStream;
private final ProcessedDomain domain;
private final DocumentDecorator documentDecorator;
private final Set<String> processedUrls = new HashSet<>();
private final DomainLinks externalDomainLinks;
private final LshDocumentDeduplicator deduplicator = new LshDocumentDeduplicator();
private static final ProcessingIterator.Factory iteratorFactory = ProcessingIterator.factory(8,
Integer.getInteger("java.util.concurrent.ForkJoinPool.common.parallelism", Runtime.getRuntime().availableProcessors())
);
SideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint) throws IOException {
SimpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint) throws IOException {
this(dataStream, sizeHint, List.of());
}
SideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) throws IOException {
SimpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) throws IOException {
this.dataStream = dataStream;
if (!dataStream.hasNext() || !(dataStream.next() instanceof CrawledDomain crawledDomain))
@@ -128,6 +177,7 @@ public class DomainProcessor {
@Override
public Iterator<ProcessedDocument> getDocumentsStream() {
return iteratorFactory.create((taskConsumer) -> {
while (dataStream.hasNext())
{
if (!(dataStream.next() instanceof CrawledDocument doc))
@@ -172,65 +222,6 @@ public class DomainProcessor {
}
}
@Nullable
public ProcessedDomain fullProcessing(SerializableCrawlDataStream dataStream) {
try {
if (!dataStream.hasNext()) {
return null;
}
List<ProcessedDocument> docs = new ArrayList<>();
Set<String> processedUrls = new HashSet<>();
if (!(dataStream.next() instanceof CrawledDomain crawledDomain)) {
throw new IllegalStateException("First record must be a domain, was " + dataStream.next().getClass().getSimpleName());
}
DomainLinks externalDomainLinks = anchorTagsSource.getAnchorTags(crawledDomain.getDomain());
DocumentDecorator documentDecorator = new DocumentDecorator();
// Process Domain Record
ProcessedDomain ret = new ProcessedDomain();
processDomain(crawledDomain, ret, documentDecorator);
ret.documents = docs;
// Process Documents
try (var deduplicator = new LshDocumentDeduplicator()) {
while (dataStream.hasNext()) {
if (!(dataStream.next() instanceof CrawledDocument doc))
continue;
if (doc.url == null)
continue;
if (doc.documentBody.isBlank())
continue;
if (!processedUrls.add(doc.url))
continue;
try {
var processedDoc = documentProcessor.process(doc, ret.domain, externalDomainLinks, documentDecorator);
deduplicator.markIfDuplicate(processedDoc);
docs.add(processedDoc);
} catch (Exception ex) {
logger.warn("Failed to process " + doc.url, ex);
}
}
}
// Add late keywords and features from domain-level information
calculateStatistics(ret, externalDomainLinks);
return ret;
}
catch (Exception ex) {
logger.warn("Failed to process domain", ex);
return null;
}
}
private void processDomain(CrawledDomain crawledDomain,
ProcessedDomain domain,
DocumentDecorator decorator)

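A minimal sketch of how the streaming contract of SimpleProcessing might be consumed, assuming the repo types are on the classpath (repo imports omitted); the drain() helper and its sink callback are illustrative, not part of the codebase:
import java.util.Iterator;
import java.util.function.Consumer;
class SimpleProcessingSketch {
    // Only one ProcessedDocument (plus the iterator's internal work queue) needs to be
    // resident at a time, which is what keeps huge domains from exhausting RAM.
    static void drain(DomainProcessor.SimpleProcessing processing, Consumer<ProcessedDocument> sink) {
        Iterator<ProcessedDocument> docs = processing.getDocumentsStream();
        while (docs.hasNext()) {
            sink.accept(docs.next());
        }
    }
}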
View File

@@ -24,7 +24,7 @@ public class DocumentValuator {
double scriptPenalty = getScriptPenalty(parsedDocument);
double chatGptPenalty = getChatGptContentFarmPenalty(parsedDocument);
int rawLength = crawledDocument.documentBody.length();
int rawLength = crawledDocument.documentBodyBytes.length;
if (textLength == 0) {
throw new DisqualifiedException(DisqualifiedException.DisqualificationReason.LENGTH);

View File

@@ -218,7 +218,10 @@ public class FeatureExtractor {
}
}
if (features.contains(HtmlFeature.JS) && adblockSimulator.hasAds(doc.clone())) {
if (features.contains(HtmlFeature.JS)
// temporarily disabled to avoid the expensive clone() call:
// adblockSimulator.hasAds(doc.clone())
) {
features.add(HtmlFeature.ADVERTISEMENT);
}

View File

@@ -14,6 +14,7 @@ import nu.marginalia.model.crawldata.CrawledDocument;
import nu.marginalia.model.html.HtmlStandard;
import javax.annotation.Nullable;
import java.io.IOException;
import java.net.URISyntaxException;
import java.util.HashSet;
import java.util.List;
@@ -25,7 +26,7 @@ public abstract class AbstractDocumentProcessorPlugin {
this.languageFilter = languageFilter;
}
public abstract DetailsWithWords createDetails(CrawledDocument crawledDocument, LinkTexts linkTexts, DocumentClass documentClass) throws DisqualifiedException, URISyntaxException;
public abstract DetailsWithWords createDetails(CrawledDocument crawledDocument, LinkTexts linkTexts, DocumentClass documentClass) throws DisqualifiedException, URISyntaxException, IOException;
public abstract boolean isApplicable(CrawledDocument doc);
protected void checkDocumentLanguage(DocumentLanguageData dld) throws DisqualifiedException {
@@ -86,6 +87,7 @@ public abstract class AbstractDocumentProcessorPlugin {
return this;
}
public MetaTagsBuilder addPubDate(PubDate pubDate) {
if (pubDate.year() > 1900) {

View File

@@ -6,6 +6,7 @@ import nu.marginalia.converting.model.DisqualifiedException;
import nu.marginalia.converting.model.DocumentHeaders;
import nu.marginalia.converting.model.GeneratorType;
import nu.marginalia.converting.model.ProcessedDocumentDetails;
import nu.marginalia.converting.processor.AcceptableAds;
import nu.marginalia.converting.processor.DocumentClass;
import nu.marginalia.converting.processor.MetaRobotsTag;
import nu.marginalia.converting.processor.logic.*;
@@ -32,11 +33,11 @@ import nu.marginalia.model.crawldata.CrawledDocument;
import nu.marginalia.model.html.HtmlStandard;
import nu.marginalia.model.idx.DocumentFlags;
import nu.marginalia.model.idx.DocumentMetadata;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.URISyntaxException;
import java.util.EnumSet;
import java.util.HashSet;
@@ -51,7 +52,6 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
private final double minDocumentQuality;
private final FeatureExtractor featureExtractor;
private final TitleExtractor titleExtractor;
private final DocumentKeywordExtractor keywordExtractor;
private final PubDateSniffer pubDateSniffer;
@@ -74,7 +74,6 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
@Named("min-document-quality") Double minDocumentQuality,
LanguageFilter languageFilter,
FeatureExtractor featureExtractor,
TitleExtractor titleExtractor,
DocumentKeywordExtractor keywordExtractor,
PubDateSniffer pubDateSniffer,
DocumentLengthLogic documentLengthLogic,
@@ -89,7 +88,6 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
this.minDocumentQuality = minDocumentQuality;
this.featureExtractor = featureExtractor;
this.titleExtractor = titleExtractor;
this.keywordExtractor = keywordExtractor;
this.pubDateSniffer = pubDateSniffer;
this.metaRobotsTag = metaRobotsTag;
@@ -108,19 +106,17 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
public DetailsWithWords createDetails(CrawledDocument crawledDocument,
LinkTexts linkTexts,
DocumentClass documentClass)
throws DisqualifiedException, URISyntaxException {
throws DisqualifiedException, URISyntaxException, IOException {
String documentBody = crawledDocument.documentBody;
if (languageFilter.isBlockedUnicodeRange(documentBody)) {
if (languageFilter.isBlockedUnicodeRange(crawledDocument.documentBody(512))) {
throw new DisqualifiedException(DisqualificationReason.LANGUAGE);
}
if (documentBody.length() > MAX_DOCUMENT_LENGTH_BYTES) { // 128kb
documentBody = documentBody.substring(0, MAX_DOCUMENT_LENGTH_BYTES);
}
Document doc = crawledDocument.parseBody();
Document doc = Jsoup.parse(documentBody);
if (AcceptableAds.hasAcceptableAdsTag(doc)) {
throw new DisqualifiedException(DisqualifiedException.DisqualificationReason.ACCEPTABLE_ADS);
}
if (!metaRobotsTag.allowIndexingByMetaTag(doc)) {
throw new DisqualifiedException(DisqualificationReason.FORBIDDEN);
@@ -138,32 +134,33 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
}
var prunedDoc = specialization.prune(doc);
DocumentLanguageData dld = sentenceExtractorProvider.get().extractSentences(prunedDoc);
checkDocumentLanguage(dld);
var ret = new ProcessedDocumentDetails();
final int length = getLength(doc);
final HtmlStandard standard = getHtmlStandard(doc);
final double quality = documentValuator.getQuality(crawledDocument, standard, doc, length);
if (isDisqualified(documentClass, url, quality, doc.title())) {
throw new DisqualifiedException(DisqualificationReason.QUALITY);
}
DocumentLanguageData dld = sentenceExtractorProvider.get().extractSentences(prunedDoc);
checkDocumentLanguage(dld);
documentLengthLogic.validateLength(dld, specialization.lengthModifier() * documentClass.lengthLimitModifier());
var ret = new ProcessedDocumentDetails();
ret.length = length;
ret.standard = standard;
ret.title = specialization.getTitle(doc, dld, crawledDocument.url);
documentLengthLogic.validateLength(dld, specialization.lengthModifier() * documentClass.lengthLimitModifier());
final Set<HtmlFeature> features = featureExtractor.getFeatures(url, doc, documentHeaders, dld);
ret.features = features;
ret.quality = documentValuator.adjustQuality(quality, features);
ret.hashCode = dld.localitySensitiveHashCode();
if (isDisqualified(documentClass, url, quality, ret.title)) {
throw new DisqualifiedException(DisqualificationReason.QUALITY);
}
PubDate pubDate = pubDateSniffer.getPubDate(documentHeaders, url, doc, standard, true);
EnumSet<DocumentFlags> documentFlags = documentFlags(features, generatorParts.type());

View File

@@ -71,7 +71,7 @@ public class PlainTextDocumentProcessorPlugin extends AbstractDocumentProcessorP
DocumentClass documentClass)
throws DisqualifiedException, URISyntaxException {
String documentBody = crawledDocument.documentBody;
String documentBody = crawledDocument.documentBody();
if (languageFilter.isBlockedUnicodeRange(documentBody)) {
throw new DisqualifiedException(DisqualifiedException.DisqualificationReason.LANGUAGE);

View File

@@ -0,0 +1,113 @@
package nu.marginalia.converting.processor.plugin.specialization;
import com.google.inject.Inject;
import com.google.inject.Singleton;
import nu.marginalia.converting.processor.logic.TitleExtractor;
import nu.marginalia.converting.processor.summary.SummaryExtractor;
import org.apache.commons.lang3.StringUtils;
import org.apache.logging.log4j.util.Strings;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
@Singleton
public class CppreferenceSpecialization extends WikiSpecialization {
@Inject
public CppreferenceSpecialization(SummaryExtractor summaryExtractor, TitleExtractor titleExtractor) {
super(summaryExtractor, titleExtractor);
}
@Override
public Document prune(Document original) {
var doc = original.clone();
doc.getElementsByClass("t-nv").remove();
doc.getElementsByClass("toc").remove();
doc.getElementsByClass("mw-head").remove();
doc.getElementsByClass("printfooter").remove();
doc.getElementsByClass("cpp-footer-base").remove();
doc.title(doc.title() + " " + Strings.join(extractExtraTokens(doc.title()), ' '));
return doc;
}
@Override
public String getSummary(Document doc, Set<String> importantWords) {
Element declTable = doc.getElementsByClass("t-dcl-begin").first();
if (declTable != null) {
var nextPar = declTable.nextElementSibling();
if (nextPar != null) {
return nextPar.text();
}
}
return super.getSummary(doc, importantWords);
}
public List<String> extractExtraTokens(String title) {
if (!title.contains("::")) {
return List.of();
}
if (!title.contains("-")) {
return List.of();
}
title = StringUtils.split(title, '-')[0];
String name = title;
for (;;) {
int lbidx = name.indexOf('<');
int rbidx = name.indexOf('>');
if (lbidx > 0 && rbidx > lbidx) {
String className = name.substring(0, lbidx);
String methodName = name.substring(rbidx + 1);
name = className + methodName;
} else {
break;
}
}
List<String> tokens = new ArrayList<>();
for (var part : name.split("\\s*,\\s*")) {
if (part.endsWith(")") && !part.endsWith("()")) {
int parenStart = part.indexOf('(');
if (parenStart > 0) { // foo(...) -> foo
part = part.substring(0, parenStart);
}
else if (parenStart == 0) { // (foo) -> foo
part = part.substring(1, part.length() - 1);
}
}
part = part.trim();
if (part.contains("::")) {
tokens.add(part);
if (part.startsWith("std::")) {
tokens.add(part.substring(5));
int ss = part.indexOf("::", 5);
if (ss > 0) {
tokens.add(part.substring(0, ss));
tokens.add(part.substring(ss+2));
}
}
}
}
return tokens;
}
}

View File

@@ -24,6 +24,7 @@ public class HtmlProcessorSpecializations {
private final WikiSpecialization wikiSpecialization;
private final BlogSpecialization blogSpecialization;
private final GogStoreSpecialization gogStoreSpecialization;
private final CppreferenceSpecialization cppreferenceSpecialization;
private final DefaultSpecialization defaultSpecialization;
@Inject
@@ -37,6 +38,7 @@ public class HtmlProcessorSpecializations {
WikiSpecialization wikiSpecialization,
BlogSpecialization blogSpecialization,
GogStoreSpecialization gogStoreSpecialization,
CppreferenceSpecialization cppreferenceSpecialization,
DefaultSpecialization defaultSpecialization) {
this.domainTypes = domainTypes;
this.lemmySpecialization = lemmySpecialization;
@@ -48,6 +50,7 @@ public class HtmlProcessorSpecializations {
this.wikiSpecialization = wikiSpecialization;
this.blogSpecialization = blogSpecialization;
this.gogStoreSpecialization = gogStoreSpecialization;
this.cppreferenceSpecialization = cppreferenceSpecialization;
this.defaultSpecialization = defaultSpecialization;
}
@@ -66,6 +69,10 @@ public class HtmlProcessorSpecializations {
return mariadbKbSpecialization;
}
if (url.domain.getTopDomain().equals("cppreference.com")) {
return cppreferenceSpecialization;
}
if (url.domain.toString().equals("store.steampowered.com")) {
return steamStoreSpecialization;
}
@@ -86,6 +93,9 @@ public class HtmlProcessorSpecializations {
if (generator.keywords().contains("javadoc")) {
return javadocSpecialization;
}
// Must be toward the end, as some specializations are for
// wiki-generator content
if (generator.type() == GeneratorType.WIKI) {
return wikiSpecialization;
}
@@ -105,7 +115,7 @@ public class HtmlProcessorSpecializations {
boolean shouldIndex(EdgeUrl url);
double lengthModifier();
void amendWords(Document doc, DocumentKeywordsBuilder words);
default void amendWords(Document doc, DocumentKeywordsBuilder words) {}
}
}

View File

@@ -4,7 +4,6 @@ import com.google.inject.Inject;
import com.google.inject.Singleton;
import nu.marginalia.converting.processor.logic.TitleExtractor;
import nu.marginalia.converting.processor.summary.SummaryExtractor;
import nu.marginalia.keyword.model.DocumentKeywordsBuilder;
import nu.marginalia.model.EdgeUrl;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
@@ -93,6 +92,8 @@ public class WikiSpecialization extends DefaultSpecialization {
return true;
}
public void amendWords(Document doc, DocumentKeywordsBuilder words) {
@Override
public double lengthModifier() {
return 2.5;
}
}

View File

@@ -19,6 +19,7 @@ import nu.marginalia.model.idx.DocumentMetadata;
import nu.marginalia.model.idx.WordFlags;
import java.net.URISyntaxException;
import java.nio.charset.StandardCharsets;
import java.time.LocalDateTime;
import java.util.EnumSet;
import java.util.List;
@@ -50,7 +51,7 @@ public class SideloaderProcessing {
"OK",
"NP",
"",
body,
body.getBytes(StandardCharsets.UTF_8),
false,
null,
null

View File

@@ -106,11 +106,7 @@ public class WarcSideloader implements SideloadSource, AutoCloseable {
return false;
var url = new EdgeUrl(warcResponse.target());
if (!Objects.equals(url.getDomain(), domain)) {
return false;
}
return true;
return Objects.equals(url.getDomain(), domain);
} catch (Exception e) {
logger.warn("Failed to process response", e);
}

View File

@@ -39,6 +39,9 @@ public class ConverterWriter implements AutoCloseable {
workerThread.start();
}
/** Queue and eventually write the domain into the converter journal
* The domain object will be closed after it's processed.
* */
public void accept(@Nullable ConverterBatchWritableIf domain) {
if (null == domain)
return;
@@ -72,15 +75,15 @@ public class ConverterWriter implements AutoCloseable {
if (workLog.isItemCommitted(id) || workLog.isItemInCurrentBatch(id)) {
logger.warn("Skipping already logged item {}", id);
}
else {
currentWriter.write(data);
workLog.logItem(id);
data.close();
continue;
}
currentWriter.write(data);
workLog.logItem(id);
switcher.tick();
data.close();
}
}
catch (Exception ex) {

View File

@@ -98,7 +98,7 @@ public class ConvertingIntegrationTest {
@Test
public void testMemexMarginaliaNuSideloadProcessing() throws IOException {
var ret = domainProcessor.sideloadProcessing(asSerializableCrawlData(readMarginaliaWorkingSet()), 100);
var ret = domainProcessor.simpleProcessing(asSerializableCrawlData(readMarginaliaWorkingSet()), 100);
assertNotNull(ret);
assertEquals("memex.marginalia.nu", ret.id());
@@ -146,7 +146,7 @@ public class ConvertingIntegrationTest {
"OK",
"",
"",
readClassPathFile(p.toString()),
readClassPathFile(p.toString()).getBytes(),
false,
null,
null

View File

@@ -8,6 +8,7 @@ import nu.marginalia.converting.model.ProcessedDomain;
import nu.marginalia.converting.processor.DomainProcessor;
import nu.marginalia.crawl.CrawlerMain;
import nu.marginalia.crawl.DomainStateDb;
import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
@@ -200,23 +201,23 @@ public class CrawlingThenConvertingIntegrationTest {
@Test
public void crawlRobotsTxt() throws Exception {
var specs = new CrawlerMain.CrawlSpecRecord("search.marginalia.nu", 5,
List.of("https://search.marginalia.nu/search?q=hello+world")
var specs = new CrawlerMain.CrawlSpecRecord("marginalia-search.com", 5,
List.of("https://marginalia-search.com/search?q=hello+world")
);
CrawledDomain domain = crawl(specs);
assertFalse(domain.doc.isEmpty());
assertEquals("OK", domain.crawlerStatus);
assertEquals("search.marginalia.nu", domain.domain);
assertEquals("marginalia-search.com", domain.domain);
Set<String> allUrls = domain.doc.stream().map(doc -> doc.url).collect(Collectors.toSet());
assertTrue(allUrls.contains("https://search.marginalia.nu/search"), "We expect a record for entities that are forbidden");
assertTrue(allUrls.contains("https://marginalia-search.com/search"), "We expect a record for entities that are forbidden");
var output = process();
assertNotNull(output);
assertFalse(output.documents.isEmpty());
assertEquals(new EdgeDomain("search.marginalia.nu"), output.domain);
assertEquals(new EdgeDomain("marginalia-search.com"), output.domain);
assertEquals(DomainIndexingState.ACTIVE, output.state);
for (var doc : output.documents) {
@@ -246,7 +247,7 @@ public class CrawlingThenConvertingIntegrationTest {
private CrawledDomain crawl(CrawlerMain.CrawlSpecRecord specs, Predicate<EdgeDomain> domainBlacklist) throws Exception {
List<SerializableCrawlData> data = new ArrayList<>();
try (var recorder = new WarcRecorder(fileName);
try (var recorder = new WarcRecorder(fileName, new Cookies());
var db = new DomainStateDb(dbTempFile))
{
new CrawlerRetreiver(httpFetcher, new DomainProber(domainBlacklist), specs, db, recorder).crawlDomain();

View File

@@ -0,0 +1,27 @@
package nu.marginalia.converting.processor.plugin.specialization;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import java.util.List;
class CppreferenceSpecializationTest {
CppreferenceSpecialization specialization = new CppreferenceSpecialization(null, null);
@Test
public void testTitleMagic() {
List<String> ret;
ret = specialization.extractExtraTokens("std::multimap<Key, T, Compare, Allocator>::crend - cppreference.com");
Assertions.assertTrue(ret.contains("std::multimap::crend"));
Assertions.assertTrue(ret.contains("multimap::crend"));
Assertions.assertTrue(ret.contains("std::multimap"));
Assertions.assertTrue(ret.contains("crend"));
ret = specialization.extractExtraTokens("std::coroutine_handle<Promise>::operator(), std::coroutine_handle<Promise>::resume - cppreference.com");
Assertions.assertTrue(ret.contains("std::coroutine_handle::operator()"));
Assertions.assertTrue(ret.contains("std::coroutine_handle::resume"));
}
}

View File

@@ -55,7 +55,6 @@ dependencies {
implementation libs.zstd
implementation libs.jwarc
implementation libs.crawlercommons
implementation libs.okhttp3
implementation libs.jsoup
implementation libs.opencsv
implementation libs.fastutil

View File

@@ -2,11 +2,16 @@ package nu.marginalia.contenttype;
import org.apache.commons.lang3.StringUtils;
import java.nio.charset.Charset;
import java.nio.charset.IllegalCharsetNameException;
import java.nio.charset.StandardCharsets;
/** Content type and charset of a document
* @param contentType The content type, e.g. "text/html"
* @param charset The charset, e.g. "UTF-8"
*/
public record ContentType(String contentType, String charset) {
public static ContentType parse(String contentTypeHeader) {
if (contentTypeHeader == null || contentTypeHeader.isBlank())
return new ContentType(null, null);
@@ -15,9 +20,31 @@ public record ContentType(String contentType, String charset) {
String contentType = parts[0].trim();
String charset = parts.length > 1 ? parts[1].trim() : "UTF-8";
if (charset.toLowerCase().startsWith("charset=")) {
charset = charset.substring("charset=".length());
}
return new ContentType(contentType, charset);
}
/** Best-effort method for turning the provided charset string into a Java Charset object,
* with some guesswork heuristics for when the name doesn't map cleanly
*/
public Charset asCharset() {
try {
if (Charset.isSupported(charset)) {
return Charset.forName(charset);
} else if (charset.equalsIgnoreCase("macintosh-latin")) {
return StandardCharsets.ISO_8859_1;
} else {
return StandardCharsets.UTF_8;
}
}
catch (IllegalCharsetNameException ex) { // thrown by Charset.isSupported()
return StandardCharsets.UTF_8;
}
}
public boolean is(String contentType) {
return this.contentType.equalsIgnoreCase(contentType);
}
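A small usage sketch of the parsing and charset fallback above; the header value and sample bytes are illustrative:
import nu.marginalia.contenttype.ContentType;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
class CharsetExample {
    static String decode(byte[] body, String contentTypeHeader) {
        // parse() splits e.g. "text/html; charset=ISO-8859-1" into type and charset;
        // asCharset() maps unknown or unsupported charset names onto a safe fallback.
        ContentType ct = ContentType.parse(contentTypeHeader);
        Charset cs = ct.asCharset();
        return new String(body, cs);
    }
    public static void main(String... args) {
        byte[] body = "händelse".getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(decode(body, "text/html; charset=ISO-8859-1"));
    }
}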

View File

@@ -1,9 +1,12 @@
package nu.marginalia.contenttype;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.IllegalCharsetNameException;
import java.nio.charset.StandardCharsets;
import java.nio.charset.UnsupportedCharsetException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
@@ -23,24 +26,25 @@ public class DocumentBodyToString {
return new String(data, charset);
}
public static Document getParsedData(ContentType type, byte[] data, int maxLength, String url) throws IOException {
final Charset charset;
if (type.charset() == null || type.charset().isBlank()) {
charset = StandardCharsets.UTF_8;
} else {
charset = charsetMap.computeIfAbsent(type, DocumentBodyToString::computeCharset);
}
ByteArrayInputStream bais = new ByteArrayInputStream(data, 0, Math.min(data.length, maxLength));
return Jsoup.parse(bais, charset.name(), url);
}
private static Charset computeCharset(ContentType type) {
try {
if (type.charset() == null || type.charset().isBlank())
return StandardCharsets.UTF_8;
else {
return Charset.forName(type.charset());
}
}
catch (IllegalCharsetNameException ex) {
// Fall back to UTF-8 if we don't understand what this is. It's *probably* fine? Maybe?
return StandardCharsets.UTF_8;
}
catch (UnsupportedCharsetException ex) {
// This is usually like Macintosh Latin
// (https://en.wikipedia.org/wiki/Macintosh_Latin_encoding)
//
// It's close enough to 8859-1 to serve
return StandardCharsets.ISO_8859_1;
return type.asCharset();
}
}
}
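A usage sketch of the new streaming parse path, which hands the raw bytes to Jsoup without building an intermediate String; the 128 kB cap and the URL parameter are illustrative values:
import nu.marginalia.contenttype.ContentType;
import nu.marginalia.contenttype.DocumentBodyToString;
import org.jsoup.nodes.Document;
import java.io.IOException;
class ParseExample {
    static Document parseCapped(byte[] rawBody, String contentTypeHeader, String url) throws IOException {
        ContentType ct = ContentType.parse(contentTypeHeader);
        // At most 128 kB of the body is fed to the parser, decoded with the resolved charset.
        return DocumentBodyToString.getParsedData(ct, rawBody, 128 * 1024, url);
    }
}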

View File

@@ -19,22 +19,19 @@ import nu.marginalia.crawl.retreival.DomainProber;
import nu.marginalia.crawl.warc.WarcArchiverFactory;
import nu.marginalia.crawl.warc.WarcArchiverIf;
import nu.marginalia.db.DomainBlacklist;
import nu.marginalia.io.CrawledDomainReader;
import nu.marginalia.io.CrawlerOutputFile;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.mq.MessageQueueFactory;
import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileWriter;
import nu.marginalia.process.ProcessConfiguration;
import nu.marginalia.process.ProcessConfigurationModule;
import nu.marginalia.process.ProcessMainClass;
import nu.marginalia.process.control.ProcessHeartbeatImpl;
import nu.marginalia.process.log.WorkLog;
import nu.marginalia.service.module.DatabaseModule;
import nu.marginalia.slop.SlopCrawlDataRecord;
import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorageId;
import nu.marginalia.util.SimpleBlockingThreadPool;
import okhttp3.ConnectionPool;
import okhttp3.Dispatcher;
import org.jetbrains.annotations.NotNull;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -85,6 +82,7 @@ public class CrawlerMain extends ProcessMainClass {
@Inject
public CrawlerMain(UserAgent userAgent,
HttpFetcherImpl httpFetcher,
ProcessHeartbeatImpl heartbeat,
MessageQueueFactory messageQueueFactory, DomainProber domainProber,
FileStorageService fileStorageService,
@@ -98,6 +96,7 @@ public class CrawlerMain extends ProcessMainClass {
super(messageQueueFactory, processConfiguration, gson, CRAWLER_INBOX);
this.userAgent = userAgent;
this.fetcher = httpFetcher;
this.heartbeat = heartbeat;
this.domainProber = domainProber;
this.fileStorageService = fileStorageService;
@@ -111,10 +110,6 @@ public class CrawlerMain extends ProcessMainClass {
Integer.getInteger("crawler.poolSize", 256),
1);
fetcher = new HttpFetcherImpl(userAgent,
new Dispatcher(),
new ConnectionPool(5, 10, TimeUnit.SECONDS)
);
// Wait for the blacklist to be loaded before starting the crawl
blacklist.waitUntilLoaded();
@@ -132,6 +127,10 @@ public class CrawlerMain extends ProcessMainClass {
System.setProperty("sun.net.client.defaultConnectTimeout", "30000");
System.setProperty("sun.net.client.defaultReadTimeout", "30000");
// Set the maximum number of connections to keep alive in the connection pool
System.setProperty("jdk.httpclient.idleTimeout", "15"); // 15 seconds
System.setProperty("jdk.httpclient.connectionPoolSize", "256");
// We don't want to use too much memory caching sessions for https
System.setProperty("javax.net.ssl.sessionCacheSize", "2048");
@@ -291,7 +290,6 @@ public class CrawlerMain extends ProcessMainClass {
}
}
public void runForSingleDomain(String targetDomainName, FileStorageId fileStorageId) throws Exception {
runForSingleDomain(targetDomainName, fileStorageService.getStorage(fileStorageId).asPath());
}
@@ -353,7 +351,7 @@ public class CrawlerMain extends ProcessMainClass {
Path newWarcFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.LIVE);
Path tempFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.TEMP);
Path parquetFile = CrawlerOutputFile.createParquetPath(outputDir, id, domain);
Path slopFile = CrawlerOutputFile.createSlopPath(outputDir, id, domain);
// Move the WARC file to a temp file if it exists, so we can resume the crawl using the old data
// while writing to the same file name as before
@@ -364,9 +362,9 @@ public class CrawlerMain extends ProcessMainClass {
Files.deleteIfExists(tempFile);
}
try (var warcRecorder = new WarcRecorder(newWarcFile); // write to a temp file for now
try (var warcRecorder = new WarcRecorder(newWarcFile, fetcher); // write to a temp file for now
var retriever = new CrawlerRetreiver(fetcher, domainProber, specification, domainStateDb, warcRecorder);
CrawlDataReference reference = getReference();
CrawlDataReference reference = getReference()
)
{
// Resume the crawl if it was aborted
@@ -387,15 +385,15 @@ public class CrawlerMain extends ProcessMainClass {
reference.delete();
// Convert the WARC file to Parquet
CrawledDocumentParquetRecordFileWriter
.convertWarc(domain, userAgent, newWarcFile, parquetFile);
SlopCrawlDataRecord
.convertWarc(domain, userAgent, newWarcFile, slopFile);
// Optionally archive the WARC file if full retention is enabled,
// otherwise delete it:
warcArchiver.consumeWarc(newWarcFile, domain);
// Mark the domain as finished in the work log
workLog.setJobToFinished(domain, parquetFile.toString(), size);
workLog.setJobToFinished(domain, slopFile.toString(), size);
// Update the progress bar
heartbeat.setProgress(tasksDone.incrementAndGet() / (double) totalTasks);
@@ -416,11 +414,22 @@ public class CrawlerMain extends ProcessMainClass {
private CrawlDataReference getReference() {
try {
return new CrawlDataReference(CrawledDomainReader.createDataStream(outputDir, domain, id));
Path slopPath = CrawlerOutputFile.getSlopPath(outputDir, id, domain);
if (Files.exists(slopPath)) {
return new CrawlDataReference(slopPath);
}
Path parquetPath = CrawlerOutputFile.getParquetPath(outputDir, id, domain);
if (Files.exists(parquetPath)) {
slopPath = migrateParquetData(parquetPath, domain, outputDir);
return new CrawlDataReference(slopPath);
}
} catch (IOException e) {
logger.debug("Failed to read previous crawl data for {}", specification.domain());
return new CrawlDataReference();
}
return new CrawlDataReference();
}
}
@@ -480,4 +489,20 @@ public class CrawlerMain extends ProcessMainClass {
}
}
}
// Migrate from parquet to slop if necessary
//
// This must be synchronized as chewing through parquet files in parallel leads to enormous memory overhead
private synchronized Path migrateParquetData(Path inputPath, String domain, Path crawlDataRoot) throws IOException {
if (!inputPath.endsWith(".parquet")) {
return inputPath;
}
Path outputFile = CrawlerOutputFile.createSlopPath(crawlDataRoot, Integer.toHexString(domain.hashCode()), domain);
SlopCrawlDataRecord.convertFromParquet(inputPath, outputFile);
return outputFile;
}
}

View File

@@ -9,6 +9,7 @@ import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.time.Instant;
import java.util.Objects;
import java.util.Optional;
/** Supplemental sqlite database for storing the summary of a crawl.
@@ -60,6 +61,8 @@ public class DomainStateDb implements AutoCloseable {
}
public record FaviconRecord(String contentType, byte[] imageData) {}
public DomainStateDb(Path filename) throws SQLException {
String sqliteDbString = "jdbc:sqlite:" + filename.toString();
connection = DriverManager.getConnection(sqliteDbString);
@@ -74,7 +77,13 @@ public class DomainStateDb implements AutoCloseable {
feedUrl TEXT
)
""");
stmt.executeUpdate("""
CREATE TABLE IF NOT EXISTS favicon (
domain TEXT PRIMARY KEY,
contentType TEXT NOT NULL,
icon BLOB NOT NULL
)
""");
stmt.execute("PRAGMA journal_mode=WAL");
}
}
@@ -85,6 +94,41 @@ public class DomainStateDb implements AutoCloseable {
}
public void saveIcon(String domain, FaviconRecord faviconRecord) {
try (var stmt = connection.prepareStatement("""
INSERT OR REPLACE INTO favicon (domain, contentType, icon)
VALUES(?, ?, ?)
""")) {
stmt.setString(1, domain);
stmt.setString(2, Objects.requireNonNullElse(faviconRecord.contentType, "application/octet-stream"));
stmt.setBytes(3, faviconRecord.imageData);
stmt.executeUpdate();
}
catch (SQLException ex) {
logger.error("Failed to insert favicon", ex);
}
}
public Optional<FaviconRecord> getIcon(String domain) {
try (var stmt = connection.prepareStatement("SELECT contentType, icon FROM favicon WHERE DOMAIN = ?")) {
stmt.setString(1, domain);
var rs = stmt.executeQuery();
if (rs.next()) {
return Optional.of(
new FaviconRecord(
rs.getString("contentType"),
rs.getBytes("icon")
)
);
}
} catch (SQLException e) {
logger.error("Failed to retrieve favicon", e);
}
return Optional.empty();
}
public void save(SummaryRecord record) {
try (var stmt = connection.prepareStatement("""
INSERT OR REPLACE INTO summary (domain, lastUpdatedEpochMs, state, stateDesc, feedUrl)

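A round-trip sketch for the new favicon table; the database path, domain and icon bytes are illustrative:
import nu.marginalia.crawl.DomainStateDb;
import java.nio.file.Path;
class FaviconDbExample {
    static void roundTrip(byte[] iconBytes) throws Exception {
        try (var db = new DomainStateDb(Path.of("/tmp/domain-state.db"))) {
            // A null contentType would be stored as application/octet-stream by saveIcon()
            db.saveIcon("www.marginalia.nu", new DomainStateDb.FaviconRecord("image/png", iconBytes));
            db.getIcon("www.marginalia.nu").ifPresent(icon ->
                    System.out.println(icon.contentType() + ", " + icon.imageData().length + " bytes"));
        }
    }
}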
View File

@@ -1,6 +1,6 @@
package nu.marginalia.crawl.fetcher;
import okhttp3.Request;
import java.net.http.HttpRequest;
/** Encapsulates request modifiers; the ETag and Last-Modified tags for a resource */
public record ContentTags(String etag, String lastMod) {
@@ -17,14 +17,14 @@ public record ContentTags(String etag, String lastMod) {
}
/** Paints the tags onto the request builder. */
public void paint(Request.Builder getBuilder) {
public void paint(HttpRequest.Builder getBuilder) {
if (etag != null) {
getBuilder.addHeader("If-None-Match", etag);
getBuilder.header("If-None-Match", etag);
}
if (lastMod != null) {
getBuilder.addHeader("If-Modified-Since", lastMod);
getBuilder.header("If-Modified-Since", lastMod);
}
}
}
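A sketch of how the tags are painted onto a JDK HttpRequest for a conditional revisit; the user-agent string, timeout and validator values are illustrative:
import nu.marginalia.crawl.fetcher.ContentTags;
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;
class ConditionalGetExample {
    static HttpRequest conditionalGet(URI uri, String etag, String lastModified) {
        var getBuilder = HttpRequest.newBuilder()
                .GET()
                .uri(uri)
                .header("User-agent", "example-crawler")
                .timeout(Duration.ofSeconds(10));
        // Adds If-None-Match and/or If-Modified-Since when the stored validators are present
        new ContentTags(etag, lastModified).paint(getBuilder);
        return getBuilder.build();
    }
}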

View File

@@ -1,33 +1,14 @@
package nu.marginalia.crawl.fetcher;
import okhttp3.Cookie;
import okhttp3.CookieJar;
import okhttp3.HttpUrl;
import java.util.Collections;
import java.io.IOException;
import java.net.CookieHandler;
import java.net.URI;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
public class Cookies {
final ThreadLocal<ConcurrentHashMap<String, List<Cookie>>> cookieJar = ThreadLocal.withInitial(ConcurrentHashMap::new);
public CookieJar getJar() {
return new CookieJar() {
@Override
public void saveFromResponse(HttpUrl url, List<Cookie> cookies) {
if (!cookies.isEmpty()) {
cookieJar.get().put(url.host(), cookies);
}
}
@Override
public List<Cookie> loadForRequest(HttpUrl url) {
return cookieJar.get().getOrDefault(url.host(), Collections.emptyList());
}
};
}
public class Cookies extends CookieHandler {
final ThreadLocal<ConcurrentHashMap<String, List<String>>> cookieJar = ThreadLocal.withInitial(ConcurrentHashMap::new);
public void clear() {
cookieJar.get().clear();
@@ -38,6 +19,16 @@ public class Cookies {
}
public List<String> getCookies() {
return cookieJar.get().values().stream().flatMap(List::stream).map(Cookie::toString).toList();
return cookieJar.get().values().stream().flatMap(List::stream).toList();
}
@Override
public Map<String, List<String>> get(URI uri, Map<String, List<String>> requestHeaders) throws IOException {
return cookieJar.get();
}
@Override
public void put(URI uri, Map<String, List<String>> responseHeaders) throws IOException {
cookieJar.get().putAll(responseHeaders);
}
}
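Since Cookies is now a java.net.CookieHandler, it can be wired straight into the JDK HttpClient, which then calls get()/put() around each exchange; a minimal sketch mirroring the client construction later in this change set:
import nu.marginalia.crawl.fetcher.Cookies;
import java.net.http.HttpClient;
import java.time.Duration;
class CookieWiringExample {
    static HttpClient clientWith(Cookies cookies) {
        return HttpClient.newBuilder()
                .cookieHandler(cookies)
                .followRedirects(HttpClient.Redirect.NORMAL)
                .connectTimeout(Duration.ofSeconds(8))
                .build();
    }
}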

View File

@@ -3,6 +3,7 @@ package nu.marginalia.crawl.fetcher;
import com.google.inject.ImplementedBy;
import crawlercommons.robots.SimpleRobotRules;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.HttpFetchResult;
@@ -11,10 +12,10 @@ import nu.marginalia.model.crawldata.CrawlerDomainStatus;
import java.util.List;
@ImplementedBy(HttpFetcherImpl.class)
public interface HttpFetcher {
public interface HttpFetcher extends AutoCloseable {
void setAllowAllContentTypes(boolean allowAllContentTypes);
List<String> getCookies();
Cookies getCookies();
void clearCookies();
DomainProbeResult probeDomain(EdgeUrl url);
@@ -27,7 +28,9 @@ public interface HttpFetcher {
HttpFetchResult fetchContent(EdgeUrl url,
WarcRecorder recorder,
ContentTags tags,
ProbeType probeType) throws HttpFetcherImpl.RateLimitException, Exception;
ProbeType probeType) throws Exception;
List<EdgeUrl> fetchSitemapUrls(String rootSitemapUrl, CrawlDelayTimer delayTimer);
SimpleRobotRules fetchRobotRules(EdgeDomain domain, WarcRecorder recorder);

View File

@@ -1,35 +1,39 @@
package nu.marginalia.crawl.fetcher;
import com.google.inject.Inject;
import com.google.inject.Singleton;
import crawlercommons.robots.SimpleRobotRules;
import crawlercommons.robots.SimpleRobotRulesParser;
import nu.marginalia.UserAgent;
import nu.marginalia.crawl.fetcher.socket.FastTerminatingSocketFactory;
import nu.marginalia.crawl.fetcher.socket.IpInterceptingNetworkInterceptor;
import nu.marginalia.crawl.fetcher.socket.NoSecuritySSL;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.ContentTypeLogic;
import nu.marginalia.model.body.DocumentBodyExtractor;
import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.model.crawldata.CrawlerDomainStatus;
import okhttp3.ConnectionPool;
import okhttp3.Dispatcher;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.parser.Parser;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import javax.net.ssl.X509TrustManager;
import java.io.InterruptedIOException;
import java.io.IOException;
import java.io.InputStream;
import java.net.URISyntaxException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;
import java.util.List;
import java.util.Objects;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
import java.util.*;
import java.util.concurrent.Executors;
import java.util.zip.GZIPInputStream;
@Singleton
public class HttpFetcherImpl implements HttpFetcher {
private final Logger logger = LoggerFactory.getLogger(getClass());
@@ -40,39 +44,28 @@ public class HttpFetcherImpl implements HttpFetcher {
private static final SimpleRobotRulesParser robotsParser = new SimpleRobotRulesParser();
private static final ContentTypeLogic contentTypeLogic = new ContentTypeLogic();
private final Duration requestTimeout = Duration.ofSeconds(10);
@Override
public void setAllowAllContentTypes(boolean allowAllContentTypes) {
contentTypeLogic.setAllowAllContentTypes(allowAllContentTypes);
}
private final OkHttpClient client;
private final HttpClient client;
private static final FastTerminatingSocketFactory ftSocketFactory = new FastTerminatingSocketFactory();
private OkHttpClient createClient(Dispatcher dispatcher, ConnectionPool pool) {
var builder = new OkHttpClient.Builder();
if (dispatcher != null) {
builder.dispatcher(dispatcher);
}
return builder.sslSocketFactory(NoSecuritySSL.buildSocketFactory(), (X509TrustManager) NoSecuritySSL.trustAllCerts[0])
.socketFactory(ftSocketFactory)
.hostnameVerifier(NoSecuritySSL.buildHostnameVerifyer())
.addNetworkInterceptor(new IpInterceptingNetworkInterceptor())
.connectionPool(pool)
.cookieJar(cookies.getJar())
.followRedirects(true)
.followSslRedirects(true)
.connectTimeout(8, TimeUnit.SECONDS)
.readTimeout(10, TimeUnit.SECONDS)
.writeTimeout(10, TimeUnit.SECONDS)
private HttpClient createClient() {
return HttpClient.newBuilder()
.sslContext(NoSecuritySSL.buildSslContext())
.cookieHandler(cookies)
.followRedirects(HttpClient.Redirect.NORMAL)
.connectTimeout(Duration.ofSeconds(8))
.executor(Executors.newCachedThreadPool())
.build();
}
@Override
public List<String> getCookies() {
return cookies.getCookies();
public Cookies getCookies() {
return cookies;
}
@Override
@@ -81,26 +74,24 @@ public class HttpFetcherImpl implements HttpFetcher {
}
@Inject
public HttpFetcherImpl(UserAgent userAgent,
Dispatcher dispatcher,
ConnectionPool connectionPool)
public HttpFetcherImpl(UserAgent userAgent)
{
this.client = createClient(dispatcher, connectionPool);
this.client = createClient();
this.userAgentString = userAgent.uaString();
this.userAgentIdentifier = userAgent.uaIdentifier();
}
public HttpFetcherImpl(String userAgent) {
this.client = createClient(null, new ConnectionPool());
this.client = createClient();
this.userAgentString = userAgent;
this.userAgentIdentifier = userAgent;
}
// Not necessary in prod, but useful in test
public void close() {
client.dispatcher().executorService().shutdown();
client.connectionPool().evictAll();
client.close();
}
/**
* Probe the domain to see if it is reachable, attempting to identify which schema to use,
* and if there are any redirects. This is done by one or more HEAD requests.
@@ -110,19 +101,26 @@ public class HttpFetcherImpl implements HttpFetcher {
*/
@Override
public DomainProbeResult probeDomain(EdgeUrl url) {
var head = new Request.Builder().head().addHeader("User-agent", userAgentString)
.url(url.toString())
HttpRequest head;
try {
head = HttpRequest.newBuilder()
.HEAD()
.uri(url.asURI())
.header("User-agent", userAgentString)
.timeout(requestTimeout)
.build();
var call = client.newCall(head);
try (var rsp = call.execute()) {
EdgeUrl requestUrl = new EdgeUrl(rsp.request().url().toString());
if (!Objects.equals(requestUrl.domain, url.domain)) {
return new DomainProbeResult.Redirect(requestUrl.domain);
} catch (URISyntaxException e) {
return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Invalid URL");
}
return new DomainProbeResult.Ok(requestUrl);
try {
var rsp = client.send(head, HttpResponse.BodyHandlers.discarding());
EdgeUrl rspUri = new EdgeUrl(rsp.uri());
if (!Objects.equals(rspUri.domain, url.domain)) {
return new DomainProbeResult.Redirect(rspUri.domain);
}
return new DomainProbeResult.Ok(rspUri);
}
catch (Exception ex) {
return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, ex.getMessage());
@@ -140,21 +138,25 @@ public class HttpFetcherImpl implements HttpFetcher {
WarcRecorder warcRecorder,
ContentTags tags) throws RateLimitException {
if (tags.isEmpty() && contentTypeLogic.isUrlLikeBinary(url)) {
var headBuilder = new Request.Builder().head()
.addHeader("User-agent", userAgentString)
.addHeader("Accept-Encoding", "gzip")
.url(url.toString());
var head = headBuilder.build();
var call = client.newCall(head);
try {
var headBuilder = HttpRequest.newBuilder()
.HEAD()
.uri(url.asURI())
.header("User-agent", userAgentString)
.header("Accept-Encoding", "gzip")
.timeout(requestTimeout)
;
try (var rsp = call.execute()) {
var contentTypeHeader = rsp.header("Content-type");
var rsp = client.send(headBuilder.build(), HttpResponse.BodyHandlers.discarding());
var headers = rsp.headers();
var contentTypeHeader = headers.firstValue("Content-Type").orElse(null);
if (contentTypeHeader != null && !contentTypeLogic.isAllowableContentType(contentTypeHeader)) {
warcRecorder.flagAsFailedContentTypeProbe(url, contentTypeHeader, rsp.code());
warcRecorder.flagAsFailedContentTypeProbe(url, contentTypeHeader, rsp.statusCode());
return new ContentTypeProbeResult.BadContentType(contentTypeHeader, rsp.code());
return new ContentTypeProbeResult.BadContentType(contentTypeHeader, rsp.statusCode());
}
// Update the URL to the final URL of the HEAD request, otherwise we might end up doing
@@ -168,27 +170,27 @@ public class HttpFetcherImpl implements HttpFetcher {
// too many eyebrows when looking at the logs on the target server. Overall it's probably desirable
// that it looks like the traffic makes sense, as opposed to looking like a broken bot.
var redirectUrl = new EdgeUrl(rsp.request().url().toString());
var redirectUrl = new EdgeUrl(rsp.uri());
EdgeUrl ret;
if (Objects.equals(redirectUrl.domain, url.domain)) ret = redirectUrl;
else ret = url;
// Intercept rate limiting
if (rsp.code() == 429) {
throw new HttpFetcherImpl.RateLimitException(Objects.requireNonNullElse(rsp.header("Retry-After"), "1"));
if (rsp.statusCode() == 429) {
throw new HttpFetcherImpl.RateLimitException(headers.firstValue("Retry-After").orElse("1"));
}
return new ContentTypeProbeResult.Ok(ret);
}
catch (HttpTimeoutException ex) {
warcRecorder.flagAsTimeout(url);
return new ContentTypeProbeResult.Timeout(ex);
}
catch (RateLimitException ex) {
throw ex;
}
catch (InterruptedIOException ex) {
warcRecorder.flagAsTimeout(url);
return new ContentTypeProbeResult.Timeout(ex);
} catch (Exception ex) {
catch (Exception ex) {
logger.error("Error during fetching {}[{}]", ex.getClass().getSimpleName(), ex.getMessage());
warcRecorder.flagAsError(url, ex);
@@ -210,13 +212,15 @@ public class HttpFetcherImpl implements HttpFetcher {
ProbeType probeType)
throws Exception
{
var getBuilder = new Request.Builder().get();
getBuilder.url(url.toString())
.addHeader("Accept-Encoding", "gzip")
.addHeader("Accept-Language", "en,*;q=0.5")
.addHeader("Accept", "text/html, application/xhtml+xml, text/*;q=0.8")
.addHeader("User-agent", userAgentString);
var getBuilder = HttpRequest.newBuilder()
.GET()
.uri(url.asURI())
.header("User-agent", userAgentString)
.header("Accept-Encoding", "gzip")
.header("Accept-Language", "en,*;q=0.5")
.header("Accept", "text/html, application/xhtml+xml, text/*;q=0.8")
.timeout(requestTimeout)
;
contentTags.paint(getBuilder);
@@ -242,6 +246,126 @@ public class HttpFetcherImpl implements HttpFetcher {
return new SitemapRetriever();
}
@Override
public List<EdgeUrl> fetchSitemapUrls(String root, CrawlDelayTimer delayTimer) {
try {
List<EdgeUrl> ret = new ArrayList<>();
Set<String> seenUrls = new HashSet<>();
Set<String> seenSitemaps = new HashSet<>();
Deque<EdgeUrl> sitemapQueue = new LinkedList<>();
EdgeUrl rootSitemapUrl = new EdgeUrl(root);
sitemapQueue.add(rootSitemapUrl);
int fetchedSitemaps = 0;
while (!sitemapQueue.isEmpty() && ret.size() < 20_000 && ++fetchedSitemaps < 10) {
var head = sitemapQueue.removeFirst();
switch (fetchSitemap(head)) {
case SitemapResult.SitemapUrls(List<String> urls) -> {
for (var url : urls) {
if (seenUrls.add(url)) {
EdgeUrl.parse(url)
.filter(u -> u.domain.equals(rootSitemapUrl.domain))
.ifPresent(ret::add);
}
}
}
case SitemapResult.SitemapReferences(List<String> refs) -> {
for (var ref : refs) {
if (seenSitemaps.add(ref)) {
EdgeUrl.parse(ref)
.filter(url -> url.domain.equals(rootSitemapUrl.domain))
.ifPresent(sitemapQueue::addFirst);
}
}
}
case SitemapResult.SitemapError() -> {}
}
delayTimer.waitFetchDelay();
}
return ret;
}
catch (Exception ex) {
logger.error("Error while fetching sitemaps via {}: {} ({})", root, ex.getClass().getSimpleName(), ex.getMessage());
return List.of();
}
}
private SitemapResult fetchSitemap(EdgeUrl sitemapUrl) throws URISyntaxException, IOException, InterruptedException {
HttpRequest getRequest = HttpRequest.newBuilder()
.GET()
.uri(sitemapUrl.asURI())
.header("Accept-Encoding", "gzip")
.header("Accept", "text/*, */*;q=0.9")
.header("User-agent", userAgentString)
.timeout(requestTimeout)
.build();
var response = client.send(getRequest, HttpResponse.BodyHandlers.ofInputStream());
if (response.statusCode() != 200) {
return new SitemapResult.SitemapError();
}
try (InputStream inputStream = response.body()) {
InputStream parserStream;
if (sitemapUrl.path.endsWith(".gz")) {
parserStream = new GZIPInputStream(inputStream);
}
else {
parserStream = inputStream;
}
Document parsedSitemap = Jsoup.parse(parserStream, "UTF-8", sitemapUrl.toString(), Parser.xmlParser());
if (parsedSitemap.childrenSize() == 0) {
return new SitemapResult.SitemapError();
}
String rootTagName = parsedSitemap.child(0).tagName();
return switch (rootTagName.toLowerCase()) {
case "sitemapindex" -> {
List<String> references = new ArrayList<>();
for (var locTag : parsedSitemap.getElementsByTag("loc")) {
references.add(locTag.text().trim());
}
yield new SitemapResult.SitemapReferences(Collections.unmodifiableList(references));
}
case "urlset" -> {
List<String> urls = new ArrayList<>();
for (var locTag : parsedSitemap.select("url > loc")) {
urls.add(locTag.text().trim());
}
yield new SitemapResult.SitemapUrls(Collections.unmodifiableList(urls));
}
case "rss", "atom" -> {
List<String> urls = new ArrayList<>();
for (var locTag : parsedSitemap.select("link, url")) {
urls.add(locTag.text().trim());
}
yield new SitemapResult.SitemapUrls(Collections.unmodifiableList(urls));
}
default -> new SitemapResult.SitemapError();
};
}
}
private sealed interface SitemapResult {
record SitemapUrls(List<String> urls) implements SitemapResult {}
record SitemapReferences(List<String> sitemapRefs) implements SitemapResult {}
record SitemapError() implements SitemapResult {}
}
@Override
public SimpleRobotRules fetchRobotRules(EdgeDomain domain, WarcRecorder recorder) {
var ret = fetchAndParseRobotsTxt(new EdgeUrl("https", domain, null, "/robots.txt", null), recorder);
@@ -257,14 +381,15 @@ public class HttpFetcherImpl implements HttpFetcher {
private Optional<SimpleRobotRules> fetchAndParseRobotsTxt(EdgeUrl url, WarcRecorder recorder) {
try {
var getBuilder = new Request.Builder().get();
var getRequest = HttpRequest.newBuilder()
.GET()
.uri(url.asURI())
.header("Accept-Encoding", "gzip")
.header("Accept", "text/*, */*;q=0.9")
.header("User-agent", userAgentString)
.timeout(requestTimeout);
getBuilder.url(url.toString())
.addHeader("Accept-Encoding", "gzip")
.addHeader("Accept", "text/*, */*;q=0.9")
.addHeader("User-agent", userAgentString);
HttpFetchResult result = recorder.fetch(client, getBuilder.build());
HttpFetchResult result = recorder.fetch(client, getRequest.build());
return DocumentBodyExtractor.asBytes(result).mapOpt((contentType, body) ->
robotsParser.parseContent(url.toString(),

View File

@@ -1,31 +0,0 @@
package nu.marginalia.crawl.fetcher.socket;
import okhttp3.Interceptor;
import okhttp3.Response;
import org.jetbrains.annotations.NotNull;
import java.io.IOException;
/** An interceptor that intercepts network requests and adds the remote IP address as
* a header in the response. This is used to pass the remote IP address to the Warc
* writer, as this information is not available in the response.
*/
public class IpInterceptingNetworkInterceptor implements Interceptor {
private static final String pseudoHeaderName = "X-Marginalia-Remote-IP";
@NotNull
@Override
public Response intercept(@NotNull Interceptor.Chain chain) throws IOException {
String IP = chain.connection().socket().getInetAddress().getHostAddress();
return chain.proceed(chain.request())
.newBuilder()
.addHeader(pseudoHeaderName, IP)
.build();
}
public static String getIpFromResponse(Response response) {
return response.header(pseudoHeaderName);
}
}

View File

@@ -27,7 +27,7 @@ public class NoSecuritySSL {
}
};
public static SSLSocketFactory buildSocketFactory() {
public static SSLContext buildSslContext() {
try {
// Install the all-trusting trust manager
final SSLContext sslContext = SSLContext.getInstance("TLS");
@@ -40,14 +40,11 @@ public class NoSecuritySSL {
clientSessionContext.setSessionCacheSize(2048);
// Create a ssl socket factory with our all-trusting manager
return sslContext.getSocketFactory();
return sslContext;
}
catch (Exception e) {
throw new RuntimeException(e);
}
}
public static HostnameVerifier buildHostnameVerifyer() {
return (hn, session) -> true;
}
}

View File

@@ -1,14 +1,14 @@
package nu.marginalia.crawl.fetcher.warc;
import okhttp3.Headers;
import okhttp3.Response;
import org.apache.commons.io.input.BOMInputStream;
import org.netpreserve.jwarc.WarcTruncationReason;
import java.io.*;
import java.net.http.HttpHeaders;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Objects;
import java.util.Map;
import java.util.zip.GZIPInputStream;
/** Input buffer for temporary storage of a HTTP response
@@ -17,8 +17,9 @@ import java.util.zip.GZIPInputStream;
* */
public abstract class WarcInputBuffer implements AutoCloseable {
protected WarcTruncationReason truncationReason = WarcTruncationReason.NOT_TRUNCATED;
protected Headers headers;
WarcInputBuffer(Headers headers) {
protected HttpHeaders headers;
WarcInputBuffer(HttpHeaders headers) {
this.headers = headers;
}
@@ -30,7 +31,7 @@ public abstract class WarcInputBuffer implements AutoCloseable {
public final WarcTruncationReason truncationReason() { return truncationReason; }
public final Headers headers() { return headers; }
public final HttpHeaders headers() { return headers; }
/** Create a buffer for a response.
* If the response is small and not compressed, it will be stored in memory.
@@ -38,26 +39,27 @@ public abstract class WarcInputBuffer implements AutoCloseable {
* and suppressed from the headers.
* If an error occurs, a buffer will be created with no content and an error status.
*/
static WarcInputBuffer forResponse(Response rsp) {
static WarcInputBuffer forResponse(HttpResponse<InputStream> rsp) {
if (rsp == null)
return new ErrorBuffer();
try {
String contentLengthHeader = Objects.requireNonNullElse(rsp.header("Content-Length"), "-1");
int contentLength = Integer.parseInt(contentLengthHeader);
String contentEncoding = rsp.header("Content-Encoding");
var headers = rsp.headers();
try (var is = rsp.body()) {
int contentLength = (int) headers.firstValueAsLong("Content-Length").orElse(-1L);
String contentEncoding = headers.firstValue("Content-Encoding").orElse(null);
if (contentEncoding == null && contentLength > 0 && contentLength < 8192) {
// If the content is small and not compressed, we can just read it into memory
return new MemoryBuffer(rsp, contentLength);
return new MemoryBuffer(headers, is, contentLength);
}
else {
// Otherwise, we unpack it into a file and read it from there
return new FileBuffer(rsp);
return new FileBuffer(headers, is);
}
}
catch (Exception ex) {
return new ErrorBuffer(rsp);
return new ErrorBuffer();
}
}
@@ -99,12 +101,8 @@ public abstract class WarcInputBuffer implements AutoCloseable {
/** Pseudo-buffer for when we have an error */
class ErrorBuffer extends WarcInputBuffer {
public ErrorBuffer() {
super(Headers.of());
truncationReason = WarcTruncationReason.UNSPECIFIED;
}
super(HttpHeaders.of(Map.of(), (k,v)->false));
public ErrorBuffer(Response rsp) {
super(rsp.headers());
truncationReason = WarcTruncationReason.UNSPECIFIED;
}
@@ -125,12 +123,12 @@ class ErrorBuffer extends WarcInputBuffer {
/** Buffer for when we have the response in memory */
class MemoryBuffer extends WarcInputBuffer {
byte[] data;
public MemoryBuffer(Response response, int size) {
super(response.headers());
public MemoryBuffer(HttpHeaders headers, InputStream responseStream, int size) {
super(headers);
var outputStream = new ByteArrayOutputStream(size);
copy(response.body().byteStream(), outputStream);
copy(responseStream, outputStream);
data = outputStream.toByteArray();
}
@@ -154,19 +152,15 @@ class MemoryBuffer extends WarcInputBuffer {
class FileBuffer extends WarcInputBuffer {
private final Path tempFile;
public FileBuffer(Response response) throws IOException {
super(suppressContentEncoding(response.headers()));
public FileBuffer(HttpHeaders headers, InputStream responseStream) throws IOException {
super(suppressContentEncoding(headers));
this.tempFile = Files.createTempFile("rsp", ".html");
if (response.body() == null) {
truncationReason = WarcTruncationReason.DISCONNECT;
return;
}
if ("gzip".equals(response.header("Content-Encoding"))) {
if ("gzip".equalsIgnoreCase(headers.firstValue("Content-Encoding").orElse(""))) {
try (var out = Files.newOutputStream(tempFile)) {
copy(new GZIPInputStream(response.body().byteStream()), out);
copy(new GZIPInputStream(responseStream), out);
}
catch (Exception ex) {
truncationReason = WarcTruncationReason.UNSPECIFIED;
@@ -174,7 +168,7 @@ class FileBuffer extends WarcInputBuffer {
}
else {
try (var out = Files.newOutputStream(tempFile)) {
copy(response.body().byteStream(), out);
copy(responseStream, out);
}
catch (Exception ex) {
truncationReason = WarcTruncationReason.UNSPECIFIED;
@@ -182,22 +176,13 @@ class FileBuffer extends WarcInputBuffer {
}
}
private static Headers suppressContentEncoding(Headers headers) {
var builder = new Headers.Builder();
headers.toMultimap().forEach((k, values) -> {
private static HttpHeaders suppressContentEncoding(HttpHeaders headers) {
return HttpHeaders.of(headers.map(), (k, v) -> {
if ("Content-Encoding".equalsIgnoreCase(k)) {
return;
}
if ("Transfer-Encoding".equalsIgnoreCase(k)) {
return;
}
for (var value : values) {
builder.add(k, value);
return false;
}
return !"Transfer-Encoding".equalsIgnoreCase(k);
});
return builder.build();
}
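As a side note, the rewritten suppressContentEncoding() leans on HttpHeaders.of(map, filter), where the predicate is consulted for each header entry; a small self-contained illustration with made-up header values:

import java.net.http.HttpHeaders;
import java.util.List;
import java.util.Map;

class HeaderFilterSketch {
    public static void main(String[] args) {
        Map<String, List<String>> raw = Map.of(
                "Content-Type", List.of("text/html"),
                "Content-Encoding", List.of("gzip"),
                "Transfer-Encoding", List.of("chunked"));

        // Keep everything except Content-Encoding and Transfer-Encoding, mirroring
        // what the buffer does once the body has been unpacked to a file
        HttpHeaders filtered = HttpHeaders.of(raw, (name, value) ->
                !"Content-Encoding".equalsIgnoreCase(name)
             && !"Transfer-Encoding".equalsIgnoreCase(name));

        System.out.println(filtered.map()); // prints only the Content-Type entry
    }
}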

View File

@@ -1,11 +1,12 @@
package nu.marginalia.crawl.fetcher.warc;
import okhttp3.Protocol;
import okhttp3.Response;
import org.apache.commons.lang3.StringUtils;
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpHeaders;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.*;
import java.util.stream.Collectors;
@@ -75,13 +76,13 @@ public class WarcProtocolReconstructor {
return "HTTP/" + version + " " + statusCode + " " + statusMessage + "\r\n" + headerString + "\r\n\r\n";
}
static String getResponseHeader(Response response, long size) {
String version = response.protocol() == Protocol.HTTP_1_1 ? "1.1" : "2.0";
static String getResponseHeader(HttpResponse<?> response, long size) {
String version = response.version() == HttpClient.Version.HTTP_1_1 ? "1.1" : "2.0";
String statusCode = String.valueOf(response.code());
String statusMessage = STATUS_CODE_MAP.getOrDefault(response.code(), "Unknown");
String statusCode = String.valueOf(response.statusCode());
String statusMessage = STATUS_CODE_MAP.getOrDefault(response.statusCode(), "Unknown");
String headerString = getHeadersAsString(response, size);
String headerString = getHeadersAsString(response.headers(), size);
return "HTTP/" + version + " " + statusCode + " " + statusMessage + "\r\n" + headerString + "\r\n\r\n";
}
@@ -148,10 +149,10 @@ public class WarcProtocolReconstructor {
return joiner.toString();
}
static private String getHeadersAsString(Response response, long responseSize) {
static private String getHeadersAsString(HttpHeaders headers, long responseSize) {
StringJoiner joiner = new StringJoiner("\r\n");
response.headers().toMultimap().forEach((k, values) -> {
headers.map().forEach((k, values) -> {
String headerCapitalized = capitalizeHeader(k);
// Omit pseudoheaders injected by the crawler itself
@@ -179,8 +180,8 @@ public class WarcProtocolReconstructor {
return joiner.toString();
}
// okhttp gives us flattened headers, so we need to reconstruct Camel-Kebab-Case style
// for the WARC parser's sake...
// okhttp gave us flattened headers, so we need to reconstruct Camel-Kebab-Case style
// for the WARC parser's sake... (do we still need this, mr chesterton?)
static private String capitalizeHeader(String k) {
return Arrays.stream(StringUtils.split(k, '-'))
.map(StringUtils::capitalize)
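The capitalizeHeader() pipeline is cut off in the hunk above; as a hedged sketch, the intent is to rebuild Camel-Kebab-Case header names from whatever casing comes back, presumably finishing with a joining collector along these lines (the final collect step is an assumption, not shown in the diff):

import org.apache.commons.lang3.StringUtils;
import java.util.Arrays;
import java.util.stream.Collectors;

class HeaderCaseSketch {
    // e.g. "content-type" -> "Content-Type", "x-robots-tag" -> "X-Robots-Tag"
    static String capitalizeHeader(String k) {
        return Arrays.stream(StringUtils.split(k, '-'))
                .map(StringUtils::capitalize)
                .collect(Collectors.joining("-"));
    }
}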

View File

@@ -1,13 +1,11 @@
package nu.marginalia.crawl.fetcher.warc;
import nu.marginalia.crawl.fetcher.ContentTags;
import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.crawl.fetcher.socket.IpInterceptingNetworkInterceptor;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.HttpFetchResult;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import org.jetbrains.annotations.Nullable;
import org.netpreserve.jwarc.*;
import org.slf4j.Logger;
@@ -18,16 +16,19 @@ import java.io.InputStream;
import java.net.InetAddress;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.http.HttpClient;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.NoSuchAlgorithmException;
import java.time.Duration;
import java.time.Instant;
import java.util.*;
/** Based on JWarc's fetch method, APL 2.0 license
* <p></p>
* This class wraps OkHttp's OkHttpClient and records the HTTP request and response in a WARC file,
* This class wraps HttpClient and records the HTTP request and response in a WARC file,
* as best is possible given not all the data is available at the same time and needs to
* be reconstructed.
*/
@@ -47,20 +48,22 @@ public class WarcRecorder implements AutoCloseable {
// Affix a version string in case we need to change the format in the future
// in some way
private final String warcRecorderVersion = "1.0";
// We need to know if the site uses cookies so this can be reported among the search results
// -- flip this to true if we see any cookies. This information will also be painted on any
// revisited pages. It's not 100% perfect and a bit order dependent, but it's good enough.
private final WarcXCookieInformationHeader cookieInformation = new WarcXCookieInformationHeader();
private final Cookies cookies;
/**
* Create a new WarcRecorder that will write to the given file
*
* @param warcFile The file to write to
*/
public WarcRecorder(Path warcFile) throws IOException {
public WarcRecorder(Path warcFile, HttpFetcherImpl fetcher) throws IOException {
this.warcFile = warcFile;
this.writer = new WarcWriter(warcFile);
this.cookies = fetcher.getCookies();
}
public WarcRecorder(Path warcFile, Cookies cookies) throws IOException {
this.warcFile = warcFile;
this.writer = new WarcWriter(warcFile);
this.cookies = cookies;
}
/**
@@ -70,36 +73,45 @@ public class WarcRecorder implements AutoCloseable {
public WarcRecorder() throws IOException {
this.warcFile = Files.createTempFile("warc", ".warc.gz");
this.writer = new WarcWriter(this.warcFile);
this.cookies = new Cookies();
temporaryFile = true;
}
public HttpFetchResult fetch(OkHttpClient client, Request request) throws NoSuchAlgorithmException,
IOException,
URISyntaxException,
InterruptedException
public HttpFetchResult fetch(HttpClient client,
java.net.http.HttpRequest request)
throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException
{
URI requestUri = request.url().uri();
URI requestUri = request.uri();
WarcDigestBuilder responseDigestBuilder = new WarcDigestBuilder();
WarcDigestBuilder payloadDigestBuilder = new WarcDigestBuilder();
String ip;
Instant date = Instant.now();
var call = client.newCall(request);
// Not entirely sure why we need to do this, but keeping it due to Chesterton's Fence
Map<String, List<String>> extraHeaders = new HashMap<>(request.headers().map());
cookieInformation.update(client, request.url());
HttpResponse<InputStream> response;
try {
response = client.send(request, java.net.http.HttpResponse.BodyHandlers.ofInputStream());
}
catch (Exception ex) {
logger.warn("Failed to fetch URL {}: {}", requestUri, ex.getMessage());
return new HttpFetchResult.ResultException(ex);
}
try (var response = call.execute();
WarcInputBuffer inputBuffer = WarcInputBuffer.forResponse(response))
try (WarcInputBuffer inputBuffer = WarcInputBuffer.forResponse(response);
InputStream inputStream = inputBuffer.read())
{
if (cookies.hasCookies()) {
extraHeaders.put("X-Has-Cookies", List.of("1"));
}
byte[] responseHeaders = WarcProtocolReconstructor.getResponseHeader(response, inputBuffer.size()).getBytes(StandardCharsets.UTF_8);
ResponseDataBuffer responseDataBuffer = new ResponseDataBuffer(inputBuffer.size() + responseHeaders.length);
InputStream inputStream = inputBuffer.read();
ip = IpInterceptingNetworkInterceptor.getIpFromResponse(response);
responseDataBuffer.put(responseHeaders);
responseDataBuffer.updateDigest(responseDigestBuilder, 0, responseHeaders.length);
@@ -123,17 +135,15 @@ public class WarcRecorder implements AutoCloseable {
// It looks like this might be the same as requestUri, but it's not;
// it's the URI after resolving redirects.
final URI responseUri = response.request().url().uri();
final URI responseUri = response.uri();
WarcResponse.Builder responseBuilder = new WarcResponse.Builder(responseUri)
.blockDigest(responseDigestBuilder.build())
.date(date)
.body(MediaType.HTTP_RESPONSE, responseDataBuffer.copyBytes());
cookieInformation.paint(responseBuilder);
if (ip != null) responseBuilder.ipAddress(InetAddress.getByName(ip));
InetAddress inetAddress = InetAddress.getByName(responseUri.getHost());
responseBuilder.ipAddress(inetAddress);
responseBuilder.payloadDigest(payloadDigestBuilder.build());
responseBuilder.truncated(inputBuffer.truncationReason());
@@ -150,8 +160,8 @@ public class WarcRecorder implements AutoCloseable {
byte[] httpRequestString = WarcProtocolReconstructor
.getHttpRequestString(
response.request().method(),
response.request().headers().toMultimap(),
request.headers().toMultimap(),
response.request().headers().map(),
extraHeaders,
requestUri)
.getBytes();
@@ -167,10 +177,29 @@ public class WarcRecorder implements AutoCloseable {
warcRequest.http(); // force HTTP header to be parsed before body is consumed so that caller can use it
writer.write(warcRequest);
if (Duration.between(date, Instant.now()).compareTo(Duration.ofSeconds(9)) > 0
&& inputBuffer.size() < 2048
&& !request.uri().getPath().endsWith("robots.txt")) // don't bail on robots.txt
{
// Fast detection and mitigation of crawler traps that respond with slow
// small responses, with a high branching factor
// Note we bail *after* writing the warc records, this will effectively only
// prevent link extraction from the document.
logger.warn("URL {} took too long to fetch ({}s) and was too small for the effort ({}b)",
requestUri,
Duration.between(date, Instant.now()).getSeconds(),
inputBuffer.size()
);
return new HttpFetchResult.ResultException(new IOException("Likely crawler trap"));
}
return new HttpFetchResult.ResultOk(responseUri,
response.code(),
response.statusCode(),
inputBuffer.headers(),
ip,
inetAddress.getHostAddress(),
responseDataBuffer.data,
dataStart,
responseDataBuffer.length() - dataStart);
@@ -185,7 +214,7 @@ public class WarcRecorder implements AutoCloseable {
writer.write(item);
}
private void saveOldResponse(EdgeUrl url, String contentType, int statusCode, String documentBody, @Nullable String headers, ContentTags contentTags) {
private void saveOldResponse(EdgeUrl url, String contentType, int statusCode, byte[] documentBody, @Nullable String headers, ContentTags contentTags) {
try {
WarcDigestBuilder responseDigestBuilder = new WarcDigestBuilder();
WarcDigestBuilder payloadDigestBuilder = new WarcDigestBuilder();
@@ -195,7 +224,7 @@ public class WarcRecorder implements AutoCloseable {
if (documentBody == null) {
bytes = new byte[0];
} else {
bytes = documentBody.getBytes();
bytes = documentBody;
}
// Create a synthesis of custom headers and the original headers
@@ -246,7 +275,9 @@ public class WarcRecorder implements AutoCloseable {
.date(Instant.now())
.body(MediaType.HTTP_RESPONSE, responseDataBuffer.copyBytes());
cookieInformation.paint(builder);
if (cookies.hasCookies()) {
builder.addHeader("X-Has-Cookies", "1");
}
var reference = builder.build();
@@ -264,7 +295,7 @@ public class WarcRecorder implements AutoCloseable {
* an E-Tag or Last-Modified header, and the server responds with a 304 Not Modified. In this
* scenario we want to record the data as it was in the previous crawl, but not re-fetch it.
*/
public void writeReferenceCopy(EdgeUrl url, String contentType, int statusCode, String documentBody, @Nullable String headers, ContentTags ctags) {
public void writeReferenceCopy(EdgeUrl url, String contentType, int statusCode, byte[] documentBody, @Nullable String headers, ContentTags ctags) {
saveOldResponse(url, contentType, statusCode, documentBody, headers, ctags);
}
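A rough usage sketch of the reworked fetch path; the client, request, and file path here are placeholders rather than the project's actual call sites:

import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.model.body.HttpFetchResult;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.nio.file.Path;

class WarcRecorderUsageSketch {
    static HttpFetchResult fetchOnce(HttpClient client, Path warcFile) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://www.example.com/"))
                .GET()
                .build();

        // The recorder writes both the request and the reconstructed response as WARC records,
        // and hands back the parsed result (or a ResultException on failure)
        try (WarcRecorder recorder = new WarcRecorder(warcFile, new Cookies())) {
            return recorder.fetch(client, request);
        }
    }
}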

View File

@@ -4,6 +4,7 @@ import nu.marginalia.ContentTypes;
import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.lsh.EasyLSH;
import nu.marginalia.model.crawldata.CrawledDocument;
import org.jetbrains.annotations.NotNull;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -11,54 +12,76 @@ import javax.annotation.Nullable;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Iterator;
import java.util.Objects;
import java.util.Optional;
/** A reference to a domain that has been crawled before. */
public class CrawlDataReference implements AutoCloseable {
public class CrawlDataReference implements AutoCloseable, Iterable<CrawledDocument> {
private boolean closed = false;
@Nullable
private final Path path;
@Nullable
private SerializableCrawlDataStream data = null;
private final SerializableCrawlDataStream data;
private static final Logger logger = LoggerFactory.getLogger(CrawlDataReference.class);
public CrawlDataReference(SerializableCrawlDataStream data) {
this.data = data;
public CrawlDataReference(@Nullable Path path) {
this.path = path;
}
public CrawlDataReference() {
this(SerializableCrawlDataStream.empty());
this(null);
}
/** Delete the associated data from disk, if it exists */
public void delete() throws IOException {
Path filePath = data.path();
if (filePath != null) {
Files.deleteIfExists(filePath);
if (path != null) {
Files.deleteIfExists(path);
}
}
/** Get the next document from the crawl data,
* returning null when there are no more documents
* available
*/
@Nullable
public CrawledDocument nextDocument() {
public @NotNull Iterator<CrawledDocument> iterator() {
requireStream();
// Guaranteed by requireStream, but helps java
Objects.requireNonNull(data);
return data.map(next -> {
if (next instanceof CrawledDocument doc && ContentTypes.isAccepted(doc.contentType)) {
return Optional.of(doc);
}
else {
return Optional.empty();
}
});
}
/** After calling this method, data is guaranteed to be non-null */
private void requireStream() {
if (closed) {
throw new IllegalStateException("Use after close()");
}
if (data == null) {
try {
while (data.hasNext()) {
if (data.next() instanceof CrawledDocument doc) {
if (!ContentTypes.isAccepted(doc.contentType))
continue;
return doc;
if (path != null) {
data = SerializableCrawlDataStream.openDataStream(path);
return;
}
}
}
catch (IOException ex) {
logger.error("Failed to read next document", ex);
catch (Exception ex) {
logger.error("Failed to open stream", ex);
}
return null;
data = SerializableCrawlDataStream.empty();
}
}
public static boolean isContentBodySame(String one, String other) {
public static boolean isContentBodySame(byte[] one, byte[] other) {
final long contentHashOne = contentHash(one);
final long contentHashOther = contentHash(other);
@@ -66,7 +89,7 @@ public class CrawlDataReference implements AutoCloseable {
return EasyLSH.hammingDistance(contentHashOne, contentHashOther) < 4;
}
private static long contentHash(String content) {
private static long contentHash(byte[] content) {
EasyLSH hash = new EasyLSH();
int next = 0;
@@ -74,8 +97,8 @@ public class CrawlDataReference implements AutoCloseable {
// In a naive best-effort fashion, extract the text
// content of the document and feed it into the LSH
for (int i = 0; i < content.length(); i++) {
char c = content.charAt(i);
for (byte b : content) {
char c = (char) b;
if (c == '<') {
isInTag = true;
} else if (c == '>') {
@@ -98,7 +121,12 @@ public class CrawlDataReference implements AutoCloseable {
}
@Override
public void close() throws Exception {
public void close() throws IOException {
if (!closed) {
if (data != null) {
data.close();
}
closed = true;
}
}
}
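With CrawlDataReference now Iterable, replaying an old crawl collapses to a for-each over accepted documents; a minimal sketch with a hypothetical file path:

import nu.marginalia.crawl.retreival.CrawlDataReference;
import nu.marginalia.model.crawldata.CrawledDocument;

import java.nio.file.Path;

class ReplaySketch {
    static void replay(Path oldCrawlData) throws Exception {
        try (CrawlDataReference reference = new CrawlDataReference(oldCrawlData)) {
            for (CrawledDocument doc : reference) {
                // Only documents with accepted content types are yielded by the iterator
                System.out.println(doc.url + " -> " + doc.httpStatus);
            }
        }
    }
}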

View File

@@ -12,7 +12,6 @@ import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.logic.LinkFilterSelector;
import nu.marginalia.crawl.retreival.revisit.CrawlerRevisitor;
import nu.marginalia.crawl.retreival.revisit.DocumentWithReference;
import nu.marginalia.crawl.retreival.sitemap.SitemapFetcher;
import nu.marginalia.ip_blocklist.UrlBlocklist;
import nu.marginalia.link_parser.LinkParser;
import nu.marginalia.model.EdgeDomain;
@@ -20,7 +19,6 @@ import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.DocumentBodyExtractor;
import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.model.crawldata.CrawlerDomainStatus;
import org.jsoup.Jsoup;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -53,7 +51,6 @@ public class CrawlerRetreiver implements AutoCloseable {
private final WarcRecorder warcRecorder;
private final CrawlerRevisitor crawlerRevisitor;
private final SitemapFetcher sitemapFetcher;
int errorCount = 0;
public CrawlerRetreiver(HttpFetcher fetcher,
@@ -71,7 +68,6 @@ public class CrawlerRetreiver implements AutoCloseable {
crawlFrontier = new DomainCrawlFrontier(new EdgeDomain(domain), specs.urls(), specs.crawlDepth());
crawlerRevisitor = new CrawlerRevisitor(crawlFrontier, this, warcRecorder);
sitemapFetcher = new SitemapFetcher(crawlFrontier, fetcher.createSitemapRetriever());
// We must always crawl the index page first, this is assumed when fingerprinting the server
var fst = crawlFrontier.peek();
@@ -93,47 +89,23 @@ public class CrawlerRetreiver implements AutoCloseable {
}
public int crawlDomain(DomainLinks domainLinks, CrawlDataReference oldCrawlData) {
try {
try (oldCrawlData) {
// Do an initial domain probe to determine the root URL
EdgeUrl rootUrl;
var probeResult = probeRootUrl();
switch (probeResult) {
return switch (probeResult) {
case HttpFetcher.DomainProbeResult.Ok(EdgeUrl probedUrl) -> {
rootUrl = probedUrl; // Good track
}
case HttpFetcher.DomainProbeResult.Redirect(EdgeDomain domain1) -> {
domainStateDb.save(DomainStateDb.SummaryRecord.forError(domain, "Redirect", domain1.toString()));
return 1;
}
case HttpFetcher.DomainProbeResult.Error(CrawlerDomainStatus status, String desc) -> {
domainStateDb.save(DomainStateDb.SummaryRecord.forError(domain, status.toString(), desc));
return 1;
}
}
// Sleep after the initial probe, we don't have access to the robots.txt yet
// so we don't know the crawl delay
TimeUnit.SECONDS.sleep(1);
return crawlDomain(oldCrawlData, rootUrl, domainLinks);
}
catch (Exception ex) {
logger.error("Error crawling domain {}", domain, ex);
return 0;
}
}
private int crawlDomain(CrawlDataReference oldCrawlData,
EdgeUrl rootUrl,
DomainLinks domainLinks) throws InterruptedException {
final SimpleRobotRules robotsRules = fetcher.fetchRobotRules(rootUrl.domain, warcRecorder);
final SimpleRobotRules robotsRules = fetcher.fetchRobotRules(probedUrl.domain, warcRecorder);
final CrawlDelayTimer delayTimer = new CrawlDelayTimer(robotsRules.getCrawlDelay());
delayTimer.waitFetchDelay(0); // initial delay after robots.txt
DomainStateDb.SummaryRecord summaryRecord = sniffRootDocument(rootUrl, delayTimer);
DomainStateDb.SummaryRecord summaryRecord = sniffRootDocument(probedUrl, delayTimer);
domainStateDb.save(summaryRecord);
// Play back the old crawl data (if present) and fetch the documents comparing etags and last-modified
@@ -142,12 +114,40 @@ public class CrawlerRetreiver implements AutoCloseable {
crawlFrontier.increaseDepth(1.5, 2500);
}
oldCrawlData.close(); // proactively close the crawl data reference here to not hold onto expensive resources
yield crawlDomain(probedUrl, robotsRules, delayTimer, domainLinks);
}
case HttpFetcher.DomainProbeResult.Redirect(EdgeDomain domain1) -> {
domainStateDb.save(DomainStateDb.SummaryRecord.forError(domain, "Redirect", domain1.toString()));
yield 1;
}
case HttpFetcher.DomainProbeResult.Error(CrawlerDomainStatus status, String desc) -> {
domainStateDb.save(DomainStateDb.SummaryRecord.forError(domain, status.toString(), desc));
yield 1;
}
};
}
catch (Exception ex) {
logger.error("Error crawling domain {}", domain, ex);
return 0;
}
}
private int crawlDomain(EdgeUrl rootUrl,
SimpleRobotRules robotsRules,
CrawlDelayTimer delayTimer,
DomainLinks domainLinks) {
// Add external links to the crawl frontier
crawlFrontier.addAllToQueue(domainLinks.getUrls(rootUrl.proto));
// Add links from the sitemap to the crawl frontier
sitemapFetcher.downloadSitemaps(robotsRules, rootUrl);
// Fetch sitemaps
for (var sitemap : robotsRules.getSitemaps()) {
crawlFrontier.addAllToQueue(fetcher.fetchSitemapUrls(sitemap, delayTimer));
}
while (!crawlFrontier.isEmpty()
&& !crawlFrontier.isCrawlDepthReached()
@@ -271,13 +271,19 @@ public class CrawlerRetreiver implements AutoCloseable {
}
// Download the sitemap if available
if (feedLink.isPresent()) {
sitemapFetcher.downloadSitemaps(List.of(feedLink.get()));
timer.waitFetchDelay(0);
}
feedLink.ifPresent(s -> fetcher.fetchSitemapUrls(s, timer));
// Grab the favicon if it exists
fetchWithRetry(faviconUrl, timer, HttpFetcher.ProbeType.DISABLED, ContentTags.empty());
if (fetchWithRetry(faviconUrl, timer, HttpFetcher.ProbeType.DISABLED, ContentTags.empty()) instanceof HttpFetchResult.ResultOk iconResult) {
String contentType = iconResult.header("Content-Type");
byte[] iconData = iconResult.getBodyBytes();
domainStateDb.saveIcon(
domain,
new DomainStateDb.FaviconRecord(contentType, iconData)
);
}
timer.waitFetchDelay(0);
}
@@ -382,14 +388,14 @@ public class CrawlerRetreiver implements AutoCloseable {
else if (fetchedDoc instanceof HttpFetchResult.Result304Raw && reference.doc() != null) {
var doc = reference.doc();
warcRecorder.writeReferenceCopy(top, doc.contentType, doc.httpStatus, doc.documentBody, doc.headers, contentTags);
warcRecorder.writeReferenceCopy(top, doc.contentType, doc.httpStatus, doc.documentBodyBytes, doc.headers, contentTags);
fetchedDoc = new HttpFetchResult.Result304ReplacedWithReference(doc.url,
new ContentType(doc.contentType, "UTF-8"),
doc.documentBody);
doc.documentBodyBytes);
if (doc.documentBody != null) {
var parsed = Jsoup.parse(doc.documentBody);
if (doc.documentBodyBytes != null) {
var parsed = doc.parseBody();
crawlFrontier.enqueueLinksFromDocument(top, parsed);
crawlFrontier.addVisited(top);
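The domain-probe handling above relies on an exhaustive switch over a sealed result type, with record deconstruction and yield; a condensed standalone illustration of the same shape, using simplified stand-in types rather than the real HttpFetcher.DomainProbeResult:

sealed interface ProbeResultSketch {
    record Ok(String probedUrl) implements ProbeResultSketch {}
    record Redirect(String otherDomain) implements ProbeResultSketch {}
    record Error(String status, String description) implements ProbeResultSketch {}
}

class ProbeSwitchSketch {
    static int handle(ProbeResultSketch result) {
        return switch (result) {
            case ProbeResultSketch.Ok(String probedUrl) -> {
                // the good track: carry on crawling from the probed URL
                yield crawl(probedUrl);
            }
            case ProbeResultSketch.Redirect(String otherDomain) -> 1;            // record the redirect, stop
            case ProbeResultSketch.Error(String status, String description) -> 1; // record the error, stop
        };
    }

    static int crawl(String rootUrl) {
        return 0; // placeholder for the actual crawl loop
    }
}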

View File

@@ -1,6 +1,5 @@
package nu.marginalia.crawl.retreival.revisit;
import com.google.common.base.Strings;
import crawlercommons.robots.SimpleRobotRules;
import nu.marginalia.crawl.fetcher.ContentTags;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
@@ -11,7 +10,8 @@ import nu.marginalia.crawl.retreival.DomainCrawlFrontier;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.model.crawldata.CrawledDocument;
import org.jsoup.Jsoup;
import java.io.IOException;
/** This class encapsulates the logic for re-visiting a domain that has already been crawled.
* We may use information from the previous crawl to inform the next crawl, specifically the
@@ -40,18 +40,12 @@ public class CrawlerRevisitor {
int errors = 0;
int skipped = 0;
for (;;) {
for (CrawledDocument doc : oldCrawlData) {
if (errors > 20) {
// If we've had too many errors, we'll stop trying to recrawl
break;
}
CrawledDocument doc = oldCrawlData.nextDocument();
if (doc == null)
break;
// This Shouldn't Happen (TM)
var urlMaybe = EdgeUrl.parse(doc.url);
if (urlMaybe.isEmpty())
continue;
@@ -70,7 +64,7 @@ public class CrawlerRevisitor {
// unlikely to produce anything meaningful for us.
if (doc.httpStatus != 200)
continue;
if (Strings.isNullOrEmpty(doc.documentBody))
if (!doc.hasBody())
continue;
if (!crawlFrontier.filterLink(url))
@@ -117,14 +111,19 @@ public class CrawlerRevisitor {
// fashion to make sure we eventually catch changes over time
// and ensure we discover new links
try {
// Hoover up any links from the document
crawlFrontier.enqueueLinksFromDocument(url, Jsoup.parse(doc.documentBody));
crawlFrontier.enqueueLinksFromDocument(url, doc.parseBody());
}
catch (IOException ex) {
//
}
// Add a WARC record so we don't repeat this
warcRecorder.writeReferenceCopy(url,
doc.contentType,
doc.httpStatus,
doc.documentBody,
doc.documentBodyBytes,
doc.headers,
new ContentTags(doc.etagMaybe, doc.lastModifiedMaybe)
);

View File

@@ -2,8 +2,6 @@ package nu.marginalia.crawl.retreival.revisit;
import nu.marginalia.crawl.fetcher.ContentTags;
import nu.marginalia.crawl.retreival.CrawlDataReference;
import nu.marginalia.model.body.DocumentBodyExtractor;
import nu.marginalia.model.body.DocumentBodyResult;
import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.model.crawldata.CrawledDocument;
@@ -35,21 +33,17 @@ public record DocumentWithReference(
return false;
if (doc == null)
return false;
if (doc.documentBody == null)
if (doc.documentBodyBytes.length == 0)
return false;
if (!(DocumentBodyExtractor.asString(resultOk) instanceof DocumentBodyResult.Ok<String> bodyOk)) {
return false;
}
return CrawlDataReference.isContentBodySame(doc.documentBody, bodyOk.body());
return CrawlDataReference.isContentBodySame(doc.documentBodyBytes, resultOk.bytesRaw());
}
public ContentTags getContentTags() {
if (null == doc)
return ContentTags.empty();
if (doc.documentBody == null || doc.httpStatus != 200)
if (doc.documentBodyBytes.length == 0 || doc.httpStatus != 200)
return ContentTags.empty();
String lastmod = doc.getLastModified();

View File

@@ -1,72 +0,0 @@
package nu.marginalia.crawl.retreival.sitemap;
import crawlercommons.robots.SimpleRobotRules;
import nu.marginalia.crawl.fetcher.SitemapRetriever;
import nu.marginalia.crawl.retreival.DomainCrawlFrontier;
import nu.marginalia.model.EdgeUrl;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.HashSet;
import java.util.List;
import java.util.Optional;
import java.util.Set;
public class SitemapFetcher {
private final DomainCrawlFrontier crawlFrontier;
private final SitemapRetriever sitemapRetriever;
private static final Logger logger = LoggerFactory.getLogger(SitemapFetcher.class);
public SitemapFetcher(DomainCrawlFrontier crawlFrontier, SitemapRetriever sitemapRetriever) {
this.crawlFrontier = crawlFrontier;
this.sitemapRetriever = sitemapRetriever;
}
public void downloadSitemaps(SimpleRobotRules robotsRules, EdgeUrl rootUrl) {
List<String> urls = robotsRules.getSitemaps();
if (urls.isEmpty()) {
urls = List.of(rootUrl.withPathAndParam("/sitemap.xml", null).toString());
}
downloadSitemaps(urls);
}
public void downloadSitemaps(List<String> urls) {
Set<String> checkedSitemaps = new HashSet<>();
for (var rawUrl : urls) {
Optional<EdgeUrl> parsedUrl = EdgeUrl.parse(rawUrl);
if (parsedUrl.isEmpty()) {
continue;
}
EdgeUrl url = parsedUrl.get();
// Let's not download sitemaps from other domains for now
if (!crawlFrontier.isSameDomain(url)) {
continue;
}
if (checkedSitemaps.contains(url.path))
continue;
var sitemap = sitemapRetriever.fetchSitemap(url);
if (sitemap.isEmpty()) {
continue;
}
// ensure we don't try to download this sitemap again
// (don't move this up, as we may want to check the same
// path with different protocols until we find one that works)
checkedSitemaps.add(url.path);
crawlFrontier.addAllToQueue(sitemap);
}
logger.debug("Queue is now {}", crawlFrontier.queueSize());
}
}

View File

@@ -32,11 +32,11 @@ dependencies {
implementation libs.bundles.parquet
implementation libs.trove
implementation libs.slop
implementation libs.jwarc
implementation libs.gson
implementation libs.commons.io
implementation libs.commons.lang3
implementation libs.okhttp3
implementation libs.jsoup
implementation libs.snakeyaml
implementation libs.zstd

View File

@@ -1,45 +0,0 @@
package nu.marginalia.io;
import nu.marginalia.io.crawldata.format.ParquetSerializableCrawlDataStream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
public class CrawledDomainReader {
private static final Logger logger = LoggerFactory.getLogger(CrawledDomainReader.class);
/** An iterator-like access to domain data. This must be closed, otherwise it will leak off-heap memory! */
public static SerializableCrawlDataStream createDataStream(Path fullPath) throws IOException
{
String fileName = fullPath.getFileName().toString();
if (fileName.endsWith(".parquet")) {
try {
return new ParquetSerializableCrawlDataStream(fullPath);
} catch (Exception ex) {
logger.error("Error reading domain data from " + fullPath, ex);
return SerializableCrawlDataStream.empty();
}
} else {
logger.error("Unknown file type: {}", fullPath);
return SerializableCrawlDataStream.empty();
}
}
/** An iterator-like access to domain data. This must be closed otherwise it will leak off-heap memory! */
public static SerializableCrawlDataStream createDataStream(Path basePath, String domain, String id) throws IOException {
Path parquetPath = CrawlerOutputFile.getParquetPath(basePath, id, domain);
if (Files.exists(parquetPath)) {
return createDataStream(parquetPath);
}
else {
throw new FileNotFoundException("No such file: " + parquetPath);
}
}
}

View File

@@ -35,7 +35,7 @@ public class CrawlerOutputFile {
return destDir.resolve(id + "-" + filesystemSafeName(domain) + "-" + version.suffix + ".warc.gz");
}
public static Path createParquetPath(Path basePath, String id, String domain) throws IOException {
public static Path createSlopPath(Path basePath, String id, String domain) throws IOException {
id = padId(id);
String first = id.substring(0, 2);
@@ -45,8 +45,9 @@ public class CrawlerOutputFile {
if (!Files.exists(destDir)) {
Files.createDirectories(destDir);
}
return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".parquet");
return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".slop.zip");
}
public static Path getParquetPath(Path basePath, String id, String domain) {
id = padId(id);
@@ -56,16 +57,18 @@ public class CrawlerOutputFile {
Path destDir = basePath.resolve(first).resolve(second);
return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".parquet");
}
public static Path getWarcPath(Path basePath, String id, String domain, WarcFileVersion version) {
public static Path getSlopPath(Path basePath, String id, String domain) {
id = padId(id);
String first = id.substring(0, 2);
String second = id.substring(2, 4);
Path destDir = basePath.resolve(first).resolve(second);
return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".warc" + version.suffix);
return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".slop.zip");
}
/**
* Pads the given ID with leading zeros to ensure it has a length of 4 characters.
*/

View File

@@ -1,35 +1,120 @@
package nu.marginalia.io;
import nu.marginalia.io.crawldata.format.ParquetSerializableCrawlDataStream;
import nu.marginalia.io.crawldata.format.SlopSerializableCrawlDataStream;
import nu.marginalia.model.crawldata.CrawledDocument;
import nu.marginalia.model.crawldata.CrawledDomain;
import nu.marginalia.model.crawldata.SerializableCrawlData;
import org.jetbrains.annotations.Nullable;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
/** Closeable iterator over serialized crawl data; operations may throw IOException.
* The data may appear in any order, and the iterator must be closed.
*
* @see CrawledDomainReader
* */
public interface SerializableCrawlDataStream extends AutoCloseable {
Logger logger = LoggerFactory.getLogger(SerializableCrawlDataStream.class);
SerializableCrawlData next() throws IOException;
/** Return a size hint for the stream. 0 is returned if the hint is not available,
* or if the file is deemed too small to bother */
default int sizeHint() { return 0; }
default int getSizeHint() { return 0; }
boolean hasNext() throws IOException;
@Nullable
default Path path() { return null; }
void close() throws IOException;
/** An iterator-like access to domain data. This must be closed, otherwise it will leak off-heap memory! */
static SerializableCrawlDataStream openDataStream(Path fullPath) throws IOException
{
String fileName = fullPath.getFileName().toString();
if (fileName.endsWith(".parquet")) {
try {
return new ParquetSerializableCrawlDataStream(fullPath);
} catch (Exception ex) {
logger.error("Error reading domain data from " + fullPath, ex);
return SerializableCrawlDataStream.empty();
}
}
if (fileName.endsWith(".slop.zip")) {
try {
return new SlopSerializableCrawlDataStream(fullPath);
} catch (Exception ex) {
logger.error("Error reading domain data from " + fullPath, ex);
return SerializableCrawlDataStream.empty();
}
}
logger.error("Unknown file type: {}", fullPath);
return SerializableCrawlDataStream.empty();
}
/** Get an indication of the size of the stream. This is used to determine whether to
* load the stream into memory or not. 0 is returned if the hint is not available,
* or if the file is deemed too small to bother */
static int getSizeHint(Path fullPath) {
String fileName = fullPath.getFileName().toString();
if (fileName.endsWith(".parquet")) {
return ParquetSerializableCrawlDataStream.sizeHint(fullPath);
}
else if (fileName.endsWith(".slop.zip")) {
return SlopSerializableCrawlDataStream.sizeHint(fullPath);
}
else {
return 0;
}
}
default <T> Iterator<T> map(Function<SerializableCrawlData, Optional<T>> mapper) {
return new Iterator<>() {
T next = null;
public boolean hasNext() {
if (next != null)
return true;
try {
while (SerializableCrawlDataStream.this.hasNext()) {
var val = mapper.apply(SerializableCrawlDataStream.this.next());
if (val.isPresent()) {
next = val.get();
return true;
}
}
}
catch (IOException ex) {
logger.error("Error during stream", ex);
}
return false;
}
public T next() {
if (next == null && !hasNext())
throw new IllegalStateException("No more data to read");
T ret = next;
next = null;
return ret;
}
};
}
/** For tests */
default List<SerializableCrawlData> asList() throws IOException {
List<SerializableCrawlData> data = new ArrayList<>();
@@ -81,7 +166,6 @@ public interface SerializableCrawlDataStream extends AutoCloseable {
public boolean hasNext() { return iterator.hasNext(); }
public void close() {}
};
}
}
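A small usage sketch of the new static entry point: openDataStream() picks the parquet or slop reader based on the file suffix, so callers only deal with the interface (the file path is a placeholder):

import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.model.crawldata.CrawledDocument;
import nu.marginalia.model.crawldata.CrawledDomain;

import java.nio.file.Path;

class StreamUsageSketch {
    static void dump(Path crawlDataFile) throws Exception {
        try (var stream = SerializableCrawlDataStream.openDataStream(crawlDataFile)) {
            while (stream.hasNext()) {
                switch (stream.next()) {
                    case CrawledDomain domain -> System.out.println("domain record: " + domain);
                    case CrawledDocument doc -> System.out.println("document: " + doc.url);
                    default -> { /* other record types are ignored in this sketch */ }
                }
            }
        }
    }
}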

View File

@@ -1,7 +1,6 @@
package nu.marginalia.io.crawldata.format;
import nu.marginalia.contenttype.ContentType;
import nu.marginalia.contenttype.DocumentBodyToString;
import nu.marginalia.hash.MurmurHash3_128;
import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.model.EdgeUrl;
@@ -18,6 +17,7 @@ import java.nio.file.Path;
import java.util.*;
import java.util.stream.Stream;
@Deprecated
public class ParquetSerializableCrawlDataStream implements AutoCloseable, SerializableCrawlDataStream {
private static final Logger logger = LoggerFactory.getLogger(ParquetSerializableCrawlDataStream.class);
@@ -40,7 +40,7 @@ public class ParquetSerializableCrawlDataStream implements AutoCloseable, Serial
return path;
}
public int sizeHint() {
public static int sizeHint(Path path) {
// Only calculate size hint for large files
// (the reason we calculate them in the first place is to assess whether it is large
// because it has many documents, or because it is a small number of large documents)
@@ -124,9 +124,7 @@ public class ParquetSerializableCrawlDataStream implements AutoCloseable, Serial
}
else if (nextRecord.body != null) {
try {
bodyString = DocumentBodyToString.getStringData(
ContentType.parse(nextRecord.contentType),
nextRecord.body);
ContentType.parse(nextRecord.contentType);
} catch (Exception ex) {
logger.error("Failed to convert body to string", ex);
status = CrawlerDocumentStatus.BAD_CHARSET;
@@ -147,7 +145,7 @@ public class ParquetSerializableCrawlDataStream implements AutoCloseable, Serial
status.toString(),
"",
nextRecord.headers,
bodyString,
nextRecord.body,
// this field isn't actually used, maybe we can skip calculating it?
nextRecord.cookies,
lastModified,

View File

@@ -0,0 +1,181 @@
package nu.marginalia.io.crawldata.format;
import nu.marginalia.contenttype.ContentType;
import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.crawldata.*;
import nu.marginalia.slop.SlopCrawlDataRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.URISyntaxException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.NoSuchElementException;
public class SlopSerializableCrawlDataStream implements AutoCloseable, SerializableCrawlDataStream {
private static final Logger logger = LoggerFactory.getLogger(SlopSerializableCrawlDataStream.class);
private final SlopCrawlDataRecord.FilteringReader reader;
// Holds the next value. This is not a buffer, but exists to deal with the fact that
// we sometimes generate multiple SerializableCrawlData records for a single input
private final Deque<SerializableCrawlData> nextQ = new ArrayDeque<>();
private boolean wroteDomainRecord = false;
private final Path path;
public SlopSerializableCrawlDataStream(Path file) throws IOException {
path = file;
reader = new SlopCrawlDataRecord.FilteringReader(file) {
@Override
public boolean filter(String url, int status, String contentType) {
String ctLc = contentType.toLowerCase();
if (ctLc.startsWith("text/"))
return true;
else if (ctLc.startsWith("x-marginalia/"))
return true;
return false;
}
};
}
@Override
public Path path() {
return path;
}
public static int sizeHint(Path path) {
// Only calculate size hint for large files
// (the reason we calculate them in the first place is to assess whether it is large
// because it has many documents, or because it is a small number of large documents)
try {
if (Files.size(path) > 10_000_000) {
return SlopCrawlDataRecord.countGoodStatusCodes(path);
}
} catch (IOException e) {
// suppressed
}
return 0;
}
@Override
public boolean hasNext() {
try {
while (reader.hasRemaining() && nextQ.isEmpty()) {
try {
var nextRecord = reader.get();
if (!wroteDomainRecord) {
createDomainRecord(nextRecord);
wroteDomainRecord = true;
}
createDocumentRecord(nextRecord);
} catch (Exception ex) {
logger.error("Failed to create document record", ex);
}
}
return !nextQ.isEmpty();
}
catch (IOException ex) {
return false;
}
}
private void createDomainRecord(SlopCrawlDataRecord parquetRecord) throws URISyntaxException {
CrawlerDomainStatus status = CrawlerDomainStatus.OK;
String statusReason = "";
String redirectDomain = null;
// The advisory content types are used to signal various states of the crawl
// that are not actual crawled documents.
switch (parquetRecord.contentType()) {
case "x-marginalia/advisory;state=redirect" -> {
EdgeUrl crawledUrl = new EdgeUrl(parquetRecord.url());
redirectDomain = crawledUrl.getDomain().toString();
status = CrawlerDomainStatus.REDIRECT;
}
case "x-marginalia/advisory;state=blocked" -> {
status = CrawlerDomainStatus.BLOCKED;
}
case "x-marginalia/advisory;state=error" -> {
status = CrawlerDomainStatus.ERROR;
statusReason = new String(parquetRecord.body());
}
}
nextQ.add(new CrawledDomain(
parquetRecord.domain(),
redirectDomain,
status.toString(),
statusReason,
parquetRecord.ip(),
new ArrayList<>(),
new ArrayList<>()
));
}
private void createDocumentRecord(SlopCrawlDataRecord nextRecord) {
CrawlerDocumentStatus status = CrawlerDocumentStatus.OK;
if (nextRecord.contentType().startsWith("x-marginalia/advisory;state=content-type-failed-probe")) {
status = CrawlerDocumentStatus.BAD_CONTENT_TYPE;
}
else if (nextRecord.contentType().startsWith("x-marginalia/advisory;state=robots-txt-skipped")) {
status = CrawlerDocumentStatus.ROBOTS_TXT;
}
else if (nextRecord.contentType().startsWith("x-marginalia/advisory")) {
// we don't care about the other advisory content types here
return;
}
else if (nextRecord.body() != null) {
try {
ContentType.parse(nextRecord.contentType());
} catch (Exception ex) {
logger.error("Failed to convert body to string", ex);
status = CrawlerDocumentStatus.BAD_CHARSET;
}
}
else {
status = CrawlerDocumentStatus.ERROR;
}
nextQ.add(new CrawledDocument("",
nextRecord.url(),
nextRecord.contentType(),
Instant.ofEpochMilli(nextRecord.timestamp()).toString(),
nextRecord.httpStatus(),
status.toString(),
"",
nextRecord.headers(),
nextRecord.body(),
// this field isn't actually used, maybe we can skip calculating it?
nextRecord.cookies(),
null,
null));
}
public void close() throws IOException {
reader.close();
}
@Override
public SerializableCrawlData next() throws IOException {
if (!hasNext())
throw new NoSuchElementException();
return nextQ.poll();
}
}

View File

@@ -18,7 +18,7 @@ public class DocumentBodyExtractor {
return asBytes(fetchOk);
}
else if (result instanceof HttpFetchResult.Result304ReplacedWithReference retained) {
return new DocumentBodyResult.Ok<>(retained.contentType(), retained.body().getBytes());
return new DocumentBodyResult.Ok<>(retained.contentType(), retained.body());
}
return new DocumentBodyResult.Error<>(CrawlerDocumentStatus.ERROR, "Fetch Result Not Ok");

View File

@@ -1,17 +1,18 @@
package nu.marginalia.model.body;
import nu.marginalia.contenttype.ContentType;
import okhttp3.Headers;
import org.jetbrains.annotations.Nullable;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.netpreserve.jwarc.MessageHeaders;
import org.netpreserve.jwarc.WarcResponse;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.InetAddress;
import java.net.URI;
import java.net.http.HttpHeaders;
import java.util.Arrays;
import java.util.Optional;
/* FIXME: This interface has a very unfortunate name that is not very descriptive.
@@ -56,42 +57,32 @@ public sealed interface HttpFetchResult {
*/
record ResultOk(URI uri,
int statusCode,
Headers headers,
HttpHeaders headers,
String ipAddress,
byte[] bytesRaw,
byte[] bytesRaw, // raw data for the entire response including headers
int bytesStart,
int bytesLength
) implements HttpFetchResult {
public ResultOk(URI uri, int status, MessageHeaders headers, String ipAddress, byte[] bytes, int bytesStart, int length) {
this(uri, status, HttpHeaders.of(headers.map(), (k,v) -> true), ipAddress, bytes, bytesStart, length);
}
public boolean isOk() {
return statusCode >= 200 && statusCode < 300;
}
public ResultOk(URI uri,
int statusCode,
MessageHeaders headers,
String ipAddress,
byte[] bytesRaw,
int bytesStart,
int bytesLength) {
this(uri, statusCode, convertHeaders(headers), ipAddress, bytesRaw, bytesStart, bytesLength);
}
private static Headers convertHeaders(MessageHeaders headers) {
var ret = new Headers.Builder();
for (var header : headers.map().entrySet()) {
for (var value : header.getValue()) {
ret.add(header.getKey(), value);
}
}
return ret.build();
}
public InputStream getInputStream() {
return new ByteArrayInputStream(bytesRaw, bytesStart, bytesLength);
}
public Optional<Document> parseDocument() throws IOException {
/** Copy the byte range corresponding to the payload of the response.
Warning: This copies the data; use getInputStream() for zero-copy access. */
public byte[] getBodyBytes() {
return Arrays.copyOfRange(bytesRaw, bytesStart, bytesStart + bytesLength);
}
public Optional<Document> parseDocument() {
return DocumentBodyExtractor.asString(this).flatMapOpt((contentType, body) -> {
if (contentType.is("text/html")) {
return Optional.of(Jsoup.parse(body));
@@ -102,8 +93,9 @@ public sealed interface HttpFetchResult {
});
}
@Nullable
public String header(String name) {
return headers.get(name);
return headers.firstValue(name).orElse(null);
}
}
@@ -114,20 +106,10 @@ public sealed interface HttpFetchResult {
*
* @see Result304Raw for the case where the document has not yet been replaced with the reference data.
*/
record Result304ReplacedWithReference(String url, ContentType contentType, String body) implements HttpFetchResult {
record Result304ReplacedWithReference(String url, ContentType contentType, byte[] body) implements HttpFetchResult {
public boolean isOk() {
return true;
}
public Optional<Document> parseDocument() {
try {
return Optional.of(Jsoup.parse(body));
}
catch (Exception ex) {
return Optional.empty();
}
}
}
/** Fetching resulted in an exception */
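The two body accessors above differ mainly in whether the payload range is copied; a short sketch consuming a ResultOk either way:

import nu.marginalia.model.body.HttpFetchResult;

import java.io.InputStream;

class ResultOkSketch {
    static void consume(HttpFetchResult result) throws Exception {
        if (result instanceof HttpFetchResult.ResultOk ok && ok.isOk()) {
            byte[] payload = ok.getBodyBytes();          // copies the payload byte range
            try (InputStream in = ok.getInputStream()) { // zero-copy view over the same range
                System.out.println(payload.length + " payload bytes, first byte: " + in.read());
            }
        }
    }
}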

View File

@@ -1,8 +1,16 @@
package nu.marginalia.model.crawldata;
import nu.marginalia.contenttype.ContentType;
import nu.marginalia.contenttype.DocumentBodyToString;
import nu.marginalia.model.EdgeUrl;
import org.apache.commons.lang3.StringUtils;
import org.jetbrains.annotations.Nullable;
import org.jsoup.nodes.Document;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Objects;
public final class CrawledDocument implements SerializableCrawlData {
public String crawlId;
@@ -19,8 +27,52 @@ public final class CrawledDocument implements SerializableCrawlData {
@Nullable
public String headers;
public String documentBody;
public String documentBody() {
return DocumentBodyToString.getStringData(
ContentType.parse(contentType),
documentBodyBytes);
}
/** Attempt to parse the first sampleSize bytes of the document body into a string */
public String documentBody(int sampleSize) {
if (sampleSize >= documentBodyBytes.length) {
return documentBody();
}
// Truncating the string at an unlucky point *may* lead to a parsing error
// ... so we try again with a longer length
for (int i = 0; i <= 3 && sampleSize + i < documentBodyBytes.length; i++) {
try {
byte[] bytes = new byte[sampleSize + i];
System.arraycopy(documentBodyBytes, 0, bytes, 0, bytes.length);
return DocumentBodyToString.getStringData(
ContentType.parse(contentType),
bytes);
}
catch (RuntimeException ex) {
// Try again with i + 1
}
}
throw new IllegalArgumentException("Failed to parse substring");
}
public Document parseBody() throws IOException {
// Prevent stalls from parsing excessively large documents
return DocumentBodyToString.getParsedData(
ContentType.parse(contentType),
documentBodyBytes,
200_000,
url);
}
public boolean hasBody() {
return documentBodyBytes.length > 0;
}
public byte[] documentBodyBytes;
/**
* This is not guaranteed to be set in all versions of the format,
* information may come in CrawledDomain instead
@@ -30,7 +82,7 @@ public final class CrawledDocument implements SerializableCrawlData {
public String lastModifiedMaybe;
public String etagMaybe;
public CrawledDocument(String crawlId, String url, String contentType, String timestamp, int httpStatus, String crawlerStatus, String crawlerStatusDesc, @Nullable String headers, String documentBody, Boolean hasCookies, String lastModifiedMaybe, String etagMaybe) {
public CrawledDocument(String crawlId, String url, String contentType, String timestamp, int httpStatus, String crawlerStatus, String crawlerStatusDesc, @Nullable String headers, byte[] documentBodyBytes, Boolean hasCookies, String lastModifiedMaybe, String etagMaybe) {
this.crawlId = crawlId;
this.url = url;
this.contentType = contentType;
@@ -39,7 +91,7 @@ public final class CrawledDocument implements SerializableCrawlData {
this.crawlerStatus = crawlerStatus;
this.crawlerStatusDesc = crawlerStatusDesc;
this.headers = headers;
this.documentBody = documentBody;
this.documentBodyBytes = Objects.requireNonNullElse(documentBodyBytes, new byte[] {});
this.hasCookies = hasCookies;
this.lastModifiedMaybe = lastModifiedMaybe;
this.etagMaybe = etagMaybe;
@@ -106,7 +158,7 @@ public final class CrawledDocument implements SerializableCrawlData {
}
public String toString() {
return "CrawledDocument(crawlId=" + this.crawlId + ", url=" + this.url + ", contentType=" + this.contentType + ", timestamp=" + this.timestamp + ", httpStatus=" + this.httpStatus + ", crawlerStatus=" + this.crawlerStatus + ", crawlerStatusDesc=" + this.crawlerStatusDesc + ", headers=" + this.headers + ", documentBody=" + this.documentBody + ", hasCookies=" + this.hasCookies + ", lastModifiedMaybe=" + this.lastModifiedMaybe + ", etagMaybe=" + this.etagMaybe + ")";
return "CrawledDocument(crawlId=" + this.crawlId + ", url=" + this.url + ", contentType=" + this.contentType + ", timestamp=" + this.timestamp + ", httpStatus=" + this.httpStatus + ", crawlerStatus=" + this.crawlerStatus + ", crawlerStatusDesc=" + this.crawlerStatusDesc + ", headers=" + this.headers + ", documentBody=" + documentBody() + ", hasCookies=" + this.hasCookies + ", lastModifiedMaybe=" + this.lastModifiedMaybe + ", etagMaybe=" + this.etagMaybe + ")";
}
public static class CrawledDocumentBuilder {
@@ -118,7 +170,7 @@ public final class CrawledDocument implements SerializableCrawlData {
private String crawlerStatus;
private String crawlerStatusDesc;
private @Nullable String headers;
private String documentBody;
private byte[] documentBodyBytes = new byte[0];
private String recrawlState;
private Boolean hasCookies;
private String lastModifiedMaybe;
@@ -168,10 +220,13 @@ public final class CrawledDocument implements SerializableCrawlData {
}
public CrawledDocumentBuilder documentBody(String documentBody) {
this.documentBody = documentBody;
this.documentBodyBytes = documentBody.getBytes(StandardCharsets.UTF_8);
return this;
}
public CrawledDocumentBuilder documentBodyBytes(byte[] documentBodyBytes) {
this.documentBodyBytes = documentBodyBytes;
return this;
}
@Deprecated
public CrawledDocumentBuilder recrawlState(String recrawlState) {
this.recrawlState = recrawlState;
@@ -194,11 +249,11 @@ public final class CrawledDocument implements SerializableCrawlData {
}
public CrawledDocument build() {
return new CrawledDocument(this.crawlId, this.url, this.contentType, this.timestamp, this.httpStatus, this.crawlerStatus, this.crawlerStatusDesc, this.headers, this.documentBody, this.hasCookies, this.lastModifiedMaybe, this.etagMaybe);
return new CrawledDocument(this.crawlId, this.url, this.contentType, this.timestamp, this.httpStatus, this.crawlerStatus, this.crawlerStatusDesc, this.headers, this.documentBodyBytes, this.hasCookies, this.lastModifiedMaybe, this.etagMaybe);
}
public String toString() {
return "CrawledDocument.CrawledDocumentBuilder(crawlId=" + this.crawlId + ", url=" + this.url + ", contentType=" + this.contentType + ", timestamp=" + this.timestamp + ", httpStatus=" + this.httpStatus + ", crawlerStatus=" + this.crawlerStatus + ", crawlerStatusDesc=" + this.crawlerStatusDesc + ", headers=" + this.headers + ", documentBody=" + this.documentBody + ", recrawlState=" + this.recrawlState + ", hasCookies=" + this.hasCookies + ", lastModifiedMaybe=" + this.lastModifiedMaybe + ", etagMaybe=" + this.etagMaybe + ")";
return "CrawledDocument.CrawledDocumentBuilder(crawlId=" + this.crawlId + ", url=" + this.url + ", contentType=" + this.contentType + ", timestamp=" + this.timestamp + ", httpStatus=" + this.httpStatus + ", crawlerStatus=" + this.crawlerStatus + ", crawlerStatusDesc=" + this.crawlerStatusDesc + ", headers=" + this.headers + ", documentBodyBytes=" + Arrays.toString(this.documentBodyBytes) + ", recrawlState=" + this.recrawlState + ", hasCookies=" + this.hasCookies + ", lastModifiedMaybe=" + this.lastModifiedMaybe + ", etagMaybe=" + this.etagMaybe + ")";
}
}
}
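A brief sketch of the new body accessors in use; the preview size is an arbitrary choice for illustration:

import nu.marginalia.model.crawldata.CrawledDocument;

class DocumentBodySketch {
    static void inspect(CrawledDocument doc) throws Exception {
        if (!doc.hasBody())
            return;

        // Decode only a prefix of the body for a cheap preview; parseBody() itself
        // truncates very large documents before handing them to the HTML parser
        String preview = doc.documentBody(65_536);
        var parsed = doc.parseBody();

        System.out.println(parsed.title() + " (" + preview.length() + " chars previewed)");
    }
}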

View File

@@ -165,27 +165,28 @@ public class CrawledDocumentParquetRecordFileWriter implements AutoCloseable {
contentType = "";
}
String headersStr = null;
StringJoiner headersStrBuilder = new StringJoiner("\n");
for (var header : headers) {
headersStrBuilder.add(header.getFirst() + ": " + header.getSecond());
for (var header : headers.map().entrySet()) {
for (var value : header.getValue()) {
headersStrBuilder.add(header.getKey() + ": " + value);
}
headersStr = headersStrBuilder.toString();
}
String headersStr = headersStrBuilder.toString();
write(new CrawledDocumentParquetRecord(
domain,
response.target(),
fetchOk.ipAddress(),
WarcXCookieInformationHeader.hasCookies(response),
headers.firstValue("X-Has-Cookies").orElse("0").equals("1"),
fetchOk.statusCode(),
response.date(),
contentType,
bodyBytes,
headersStr,
headers.get("ETag"),
headers.get("Last-Modified"))
);
headers.firstValue("ETag").orElse(null),
headers.firstValue("Last-Modified").orElse(null)
));
}

View File

@@ -0,0 +1,522 @@
package nu.marginalia.slop;
import nu.marginalia.ContentTypes;
import nu.marginalia.UserAgent;
import nu.marginalia.model.body.DocumentBodyExtractor;
import nu.marginalia.model.body.DocumentBodyResult;
import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecord;
import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileReader;
import nu.marginalia.slop.column.array.ByteArrayColumn;
import nu.marginalia.slop.column.primitive.ByteColumn;
import nu.marginalia.slop.column.primitive.LongColumn;
import nu.marginalia.slop.column.primitive.ShortColumn;
import nu.marginalia.slop.column.string.EnumColumn;
import nu.marginalia.slop.column.string.StringColumn;
import nu.marginalia.slop.desc.StorageType;
import nu.marginalia.slop.storage.LargeItem;
import org.apache.commons.io.FileUtils;
import org.apache.commons.lang3.StringUtils;
import org.netpreserve.jwarc.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.util.List;
import java.util.Objects;
import java.util.StringJoiner;
public record SlopCrawlDataRecord(String domain,
String url,
String ip,
boolean cookies,
int httpStatus,
long timestamp,
String contentType,
byte[] body,
String headers)
{
private static final EnumColumn domainColumn = new EnumColumn("domain", StandardCharsets.UTF_8, StorageType.ZSTD);
private static final StringColumn urlColumn = new StringColumn("url", StandardCharsets.UTF_8, StorageType.ZSTD);
private static final StringColumn ipColumn = new StringColumn("ip", StandardCharsets.ISO_8859_1, StorageType.ZSTD);
private static final ByteColumn cookiesColumn = new ByteColumn("cookies");
private static final ShortColumn statusColumn = new ShortColumn("httpStatus");
private static final LongColumn timestampColumn = new LongColumn("timestamp");
private static final EnumColumn contentTypeColumn = new EnumColumn("contentType", StandardCharsets.UTF_8);
private static final ByteArrayColumn bodyColumn = new ByteArrayColumn("body", StorageType.ZSTD);
private static final StringColumn headerColumn = new StringColumn("header", StandardCharsets.UTF_8, StorageType.ZSTD);
public SlopCrawlDataRecord(CrawledDocumentParquetRecord parquetRecord) {
this(parquetRecord.domain,
parquetRecord.url,
parquetRecord.ip,
parquetRecord.cookies,
parquetRecord.httpStatus,
parquetRecord.timestamp.toEpochMilli(),
parquetRecord.contentType,
parquetRecord.body,
parquetRecord.headers
);
}
private static SlopCrawlDataRecord forDomainRedirect(String domain, Instant date, String redirectDomain) {
return new SlopCrawlDataRecord(domain,
"https://" + redirectDomain + "/",
"",
false,
0,
date.toEpochMilli(),
"x-marginalia/advisory;state=redirect",
new byte[0],
""
);
}
private static SlopCrawlDataRecord forDomainError(String domain, Instant date, String ip, String errorStatus) {
return new SlopCrawlDataRecord(domain,
"https://" + domain + "/",
ip,
false,
0,
date.toEpochMilli(),
"x-marginalia/advisory;state=error",
errorStatus.getBytes(),
""
);
}
private static SlopCrawlDataRecord forDocError(String domain, Instant date, String url, String errorStatus) {
return new SlopCrawlDataRecord(domain,
url,
"",
false,
0,
date.toEpochMilli(),
errorStatus,
new byte[0],
""
);
}
public static void convertFromParquet(Path parquetInput, Path slopOutput) throws IOException {
Path tempDir = Files.createTempDirectory(slopOutput.getParent(), "conversion");
try (var writer = new Writer(tempDir);
var stream = CrawledDocumentParquetRecordFileReader.stream(parquetInput))
{
stream.forEach(
parquetRecord -> {
try {
writer.write(new SlopCrawlDataRecord(parquetRecord));
} catch (IOException e) {
throw new RuntimeException(e);
}
});
}
catch (IOException ex) {
FileUtils.deleteDirectory(tempDir.toFile());
throw ex;
}
try {
SlopTablePacker.packToSlopZip(tempDir, slopOutput);
FileUtils.deleteDirectory(tempDir.toFile());
}
catch (Exception ex) {
logger.error("Failed to convert WARC file to Parquet", ex);
}
}
private static final Logger logger = LoggerFactory.getLogger(SlopCrawlDataRecord.class);
public static void convertWarc(String domain,
UserAgent userAgent,
Path warcInputFile,
Path slopOutputFile) throws IOException {
Path tempDir = Files.createTempDirectory(slopOutputFile.getParent(), "slop-"+domain);
try (var warcReader = new WarcReader(warcInputFile);
var slopWriter = new SlopCrawlDataRecord.Writer(tempDir)
) {
WarcXResponseReference.register(warcReader);
WarcXEntityRefused.register(warcReader);
String uaString = userAgent.uaString();
for (var record : warcReader) {
try {
if (record instanceof WarcResponse response) {
// this also captures WarcXResponseReference, which inherits from WarcResponse
// and is used to store old responses from previous crawls; in this part of the logic
// we treat them the same as a normal response
if (!filterResponse(uaString, response)) {
continue;
}
slopWriter.write(domain, response);
} else if (record instanceof WarcXEntityRefused refused) {
slopWriter.write(domain, refused);
} else if (record instanceof Warcinfo warcinfo) {
slopWriter.write(warcinfo);
}
}
catch (Exception ex) {
logger.error("Failed to convert WARC record to Parquet", ex);
}
}
}
catch (Exception ex) {
logger.error("Failed to convert WARC file to Parquet", ex);
}
try {
SlopTablePacker.packToSlopZip(tempDir, slopOutputFile);
FileUtils.deleteDirectory(tempDir.toFile());
}
catch (Exception ex) {
logger.error("Failed to convert WARC file to Parquet", ex);
}
}
/** Return true if the WarcResponse should be kept in the conversion, false if it should be excluded */
private static boolean filterResponse(String uaString, WarcResponse response) throws IOException {
// We don't want to store robots.txt files, as they are not
// interesting for the analysis we want to do.  This check matters
// because txt files in general *are* interesting, so we don't want
// to exclude them as a class.
if (response.targetURI().getPath().equals("/robots.txt")) {
return false;
}
var headers = response.http().headers();
var robotsTags = headers.all("X-Robots-Tag");
if (!isXRobotsTagsPermitted(robotsTags, uaString)) {
return false;
}
// Strip out responses with content types we aren't interested in
// (though ideally we wouldn't download these at all)
String contentType = headers.first("Content-Type").orElse("text/plain").toLowerCase();
if (!ContentTypes.isAccepted(contentType)) {
return false;
}
return true;
}
/** Check X-Robots-Tag header tag to see if we are allowed to index this page.
* <p>
* Reference: <a href="https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag">https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag</a>
*
* @param xRobotsHeaderTags List of X-Robots-Tag values
* @param userAgent User agent string
* @return true if we are allowed to index this page
*/
// Visible for tests
public static boolean isXRobotsTagsPermitted(List<String> xRobotsHeaderTags, String userAgent) {
boolean isPermittedGeneral = true;
boolean isPermittedMarginalia = false;
boolean isForbiddenMarginalia = false;
for (String header : xRobotsHeaderTags) {
if (header.indexOf(':') >= 0) {
String[] parts = StringUtils.split(header, ":", 2);
if (parts.length < 2)
continue;
// Is this relevant to us?
if (!Objects.equals(parts[0].trim(), userAgent))
continue;
if (parts[1].contains("noindex"))
isForbiddenMarginalia = true;
else if (parts[1].contains("none"))
isForbiddenMarginalia = true;
else if (parts[1].contains("all"))
isPermittedMarginalia = true;
}
else {
if (header.contains("noindex"))
isPermittedGeneral = false;
if (header.contains("none"))
isPermittedGeneral = false;
}
}
if (isPermittedMarginalia)
return true;
if (isForbiddenMarginalia)
return false;
return isPermittedGeneral;
}
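    // Illustrative only, not part of this change set: expected behaviour of the
    // X-Robots-Tag logic above for a hypothetical "search.marginalia.nu" user-agent token.
    static void xRobotsTagExamples() {
        assert !isXRobotsTagsPermitted(List.of("noindex"), "search.marginalia.nu");           // general noindex applies
        assert isXRobotsTagsPermitted(List.of("googlebot: noindex"), "search.marginalia.nu"); // directed at another agent
        assert isXRobotsTagsPermitted(List.of("search.marginalia.nu: all", "noindex"),
                "search.marginalia.nu");                                                      // explicit allow overrides
    }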
public static int countGoodStatusCodes(Path path) throws IOException {
int cnt = 0;
try (var table = new SlopTable(path)) {
ShortColumn.Reader statusReader = statusColumn.open(table);
while (statusReader.hasRemaining()) {
if (statusReader.get() == 200) {
cnt++;
}
}
}
return cnt;
}
public static class Writer extends SlopTable {
private final EnumColumn.Writer domainColumnWriter;
private final StringColumn.Writer urlColumnWriter;
private final StringColumn.Writer ipColumnWriter;
private final ByteColumn.Writer cookiesColumnWriter;
private final ShortColumn.Writer statusColumnWriter;
private final LongColumn.Writer timestampColumnWriter;
private final EnumColumn.Writer contentTypeColumnWriter;
private final ByteArrayColumn.Writer bodyColumnWriter;
private final StringColumn.Writer headerColumnWriter;
public Writer(Path path) throws IOException {
super(path);
domainColumnWriter = domainColumn.create(this);
urlColumnWriter = urlColumn.create(this);
ipColumnWriter = ipColumn.create(this);
cookiesColumnWriter = cookiesColumn.create(this);
statusColumnWriter = statusColumn.create(this);
timestampColumnWriter = timestampColumn.create(this);
contentTypeColumnWriter = contentTypeColumn.create(this);
bodyColumnWriter = bodyColumn.create(this);
headerColumnWriter = headerColumn.create(this);
}
public void write(SlopCrawlDataRecord record) throws IOException {
domainColumnWriter.put(record.domain);
urlColumnWriter.put(record.url);
ipColumnWriter.put(record.ip);
cookiesColumnWriter.put(record.cookies ? (byte) 1 : (byte) 0);
statusColumnWriter.put((short) record.httpStatus);
timestampColumnWriter.put(record.timestamp);
contentTypeColumnWriter.put(record.contentType);
bodyColumnWriter.put(record.body);
headerColumnWriter.put(record.headers);
}
public void write(String domain, WarcResponse response) throws IOException {
HttpFetchResult result = HttpFetchResult.importWarc(response);
if (!(result instanceof HttpFetchResult.ResultOk fetchOk)) {
return;
}
byte[] bodyBytes;
String contentType;
var body = DocumentBodyExtractor.asBytes(result);
var headers = fetchOk.headers();
if (body instanceof DocumentBodyResult.Ok<byte[]> bodyOk) {
bodyBytes = bodyOk.body();
contentType = bodyOk.contentType().toString();
}
else {
bodyBytes = new byte[0];
contentType = "";
}
String headersStr;
StringJoiner headersStrBuilder = new StringJoiner("\n");
for (var header : headers.map().entrySet()) {
for (var value : header.getValue()) {
headersStrBuilder.add(header.getKey() + ": " + value);
}
}
headersStr = headersStrBuilder.toString();
write(new SlopCrawlDataRecord(
domain,
response.target(),
fetchOk.ipAddress(),
"1".equals(headers.firstValue("X-Cookies").orElse("0")),
fetchOk.statusCode(),
response.date().toEpochMilli(),
contentType,
bodyBytes,
headersStr
)
);
}
private void write(String domain, WarcXEntityRefused refused) throws IOException {
URI profile = refused.profile();
String meta;
if (profile.equals(WarcXEntityRefused.documentRobotsTxtSkippedURN)) {
meta = "x-marginalia/advisory;state=robots-txt-skipped";
}
else if (profile.equals(WarcXEntityRefused.documentBadContentTypeURN)) {
meta = "x-marginalia/advisory;state=content-type-failed-probe";
}
else if (profile.equals(WarcXEntityRefused.documentProbeTimeout)) {
meta = "x-marginalia/advisory;state=timeout-probe";
}
else if (profile.equals(WarcXEntityRefused.documentUnspecifiedError)) {
meta = "x-marginalia/advisory;state=doc-error";
}
else {
meta = "x-marginalia/advisory;state=unknown";
}
write(forDocError(domain, refused.date(), refused.target(), meta));
}
private void write(Warcinfo warcinfo) throws IOException {
String selfDomain = warcinfo.fields().first("domain").orElse("");
String ip = warcinfo.fields().first("ip").orElse("");
String probeStatus = warcinfo.fields().first("X-WARC-Probe-Status").orElse("");
if (probeStatus.startsWith("REDIRECT")) {
String redirectDomain = probeStatus.substring("REDIRECT;".length());
write(forDomainRedirect(selfDomain, warcinfo.date(), redirectDomain));
}
else if (!"OK".equals(probeStatus)) {
write(forDomainError(selfDomain, warcinfo.date(), ip, probeStatus));
}
}
}
public static class Reader extends SlopTable {
private final EnumColumn.Reader domainColumnReader;
private final StringColumn.Reader urlColumnReader;
private final StringColumn.Reader ipColumnReader;
private final ByteColumn.Reader cookiesColumnReader;
private final ShortColumn.Reader statusColumnReader;
private final LongColumn.Reader timestampColumnReader;
private final EnumColumn.Reader contentTypeColumnReader;
private final ByteArrayColumn.Reader bodyColumnReader;
private final StringColumn.Reader headerColumnReader;
public Reader(Path path) throws IOException {
super(path);
domainColumnReader = domainColumn.open(this);
urlColumnReader = urlColumn.open(this);
ipColumnReader = ipColumn.open(this);
cookiesColumnReader = cookiesColumn.open(this);
statusColumnReader = statusColumn.open(this);
timestampColumnReader = timestampColumn.open(this);
contentTypeColumnReader = contentTypeColumn.open(this);
bodyColumnReader = bodyColumn.open(this);
headerColumnReader = headerColumn.open(this);
}
public SlopCrawlDataRecord get() throws IOException {
return new SlopCrawlDataRecord(
domainColumnReader.get(),
urlColumnReader.get(),
ipColumnReader.get(),
cookiesColumnReader.get() == 1,
statusColumnReader.get(),
timestampColumnReader.get(),
contentTypeColumnReader.get(),
bodyColumnReader.get(),
headerColumnReader.get()
);
}
public boolean hasRemaining() throws IOException {
return domainColumnReader.hasRemaining();
}
}
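    // Illustrative only, not part of this change set: streaming every record out of a slop
    // crawl file with the plain Reader above (the path argument is hypothetical).
    static void printStatuses(Path slopFile) throws IOException {
        try (var reader = new Reader(slopFile)) {
            while (reader.hasRemaining()) {
                SlopCrawlDataRecord rec = reader.get();
                System.out.println(rec.httpStatus() + " " + rec.url());
            }
        }
    }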
public abstract static class FilteringReader extends SlopTable {
private final EnumColumn.Reader domainColumnReader;
private final StringColumn.Reader urlColumnReader;
private final StringColumn.Reader ipColumnReader;
private final ByteColumn.Reader cookiesColumnReader;
private final ShortColumn.Reader statusColumnReader;
private final LongColumn.Reader timestampColumnReader;
private final EnumColumn.Reader contentTypeColumnReader;
private final ByteArrayColumn.Reader bodyColumnReader;
private final StringColumn.Reader headerColumnReader;
private SlopCrawlDataRecord next = null;
public FilteringReader(Path path) throws IOException {
super(path);
domainColumnReader = domainColumn.open(this);
urlColumnReader = urlColumn.open(this);
ipColumnReader = ipColumn.open(this);
cookiesColumnReader = cookiesColumn.open(this);
statusColumnReader = statusColumn.open(this);
timestampColumnReader = timestampColumn.open(this);
contentTypeColumnReader = contentTypeColumn.open(this);
bodyColumnReader = bodyColumn.open(this);
headerColumnReader = headerColumn.open(this);
}
public abstract boolean filter(String url, int status, String contentType);
public SlopCrawlDataRecord get() throws IOException {
if (next == null) {
if (!hasRemaining()) {
throw new IllegalStateException("No more values remaining");
}
}
var val = next;
next = null;
return val;
}
public boolean hasRemaining() throws IOException {
if (next != null)
return true;
while (domainColumnReader.hasRemaining()) {
String domain = domainColumnReader.get();
String url = urlColumnReader.get();
String ip = ipColumnReader.get();
boolean cookies = cookiesColumnReader.get() == 1;
int status = statusColumnReader.get();
long timestamp = timestampColumnReader.get();
String contentType = contentTypeColumnReader.get();
LargeItem<byte[]> body = bodyColumnReader.getLarge();
LargeItem<String> headers = headerColumnReader.getLarge();
if (filter(url, status, contentType)) {
next = new SlopCrawlDataRecord(
domain, url, ip, cookies, status, timestamp, contentType, body.get(), headers.get()
);
return true;
}
else {
body.close();
headers.close();
}
}
return false;
}
}
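    // Illustrative only, not part of this change set: a minimal FilteringReader that keeps
    // just successfully fetched HTML documents (the class name is hypothetical). Note that
    // the reader above only materialises the potentially large body/header values for
    // records that pass the filter, and closes the LargeItem handles for skipped records.
    public static class HtmlOnlyReader extends FilteringReader {
        public HtmlOnlyReader(Path path) throws IOException {
            super(path);
        }

        @Override
        public boolean filter(String url, int status, String contentType) {
            return status == 200 && contentType.toLowerCase().startsWith("text/html");
        }
    }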
}


@@ -1,35 +0,0 @@
package org.netpreserve.jwarc;
import okhttp3.HttpUrl;
import okhttp3.OkHttpClient;
/** Encapsulates out-of-band information about whether a website uses cookies,
* using a non-standard WARC header "X-Has-Cookies".
*/
public class WarcXCookieInformationHeader {
private boolean hasCookies = false;
private static final String headerName = "X-Has-Cookies";
public void update(OkHttpClient client, HttpUrl url) {
if (!hasCookies) {
hasCookies = !client.cookieJar().loadForRequest(url).isEmpty();
}
}
public boolean hasCookies() {
return hasCookies;
}
public void paint(WarcResponse.Builder builder) {
builder.addHeader(headerName, hasCookies ? "1" : "0");
}
public void paint(WarcXResponseReference.Builder builder) {
builder.addHeader(headerName, hasCookies ? "1" : "0");
}
public static boolean hasCookies(WarcRecord record) {
return record.headers().contains(headerName, "1");
}
}


@@ -80,7 +80,7 @@ class CrawledDocumentParquetRecordFileWriterTest {
var document = (CrawledDocument) secondItem;
assertEquals("https://www.marginalia.nu/", document.url);
assertEquals("text/html", document.contentType);
assertEquals("hello world", document.documentBody);
assertEquals("hello world", document.documentBody());
assertEquals(200, document.httpStatus);
}
@@ -103,7 +103,7 @@ class CrawledDocumentParquetRecordFileWriterTest {
System.out.println(doc.url);
System.out.println(doc.contentType);
System.out.println(doc.httpStatus);
- System.out.println(doc.documentBody.length());
+ System.out.println(doc.documentBody().length());
}
}
} catch (IOException e) {


@@ -10,7 +10,7 @@ import java.nio.file.Path;
import java.sql.SQLException;
import java.time.Instant;
- import static org.junit.jupiter.api.Assertions.assertEquals;
+ import static org.junit.jupiter.api.Assertions.*;
class DomainStateDbTest {
@@ -26,7 +26,7 @@ class DomainStateDbTest {
}
@Test
- public void testSunnyDay() throws SQLException {
+ public void testSummaryRecord() throws SQLException {
try (var db = new DomainStateDb(tempFile)) {
var allFields = new DomainStateDb.SummaryRecord(
"all.marginalia.nu",
@@ -63,4 +63,21 @@ class DomainStateDbTest {
}
}
+ @Test
+ public void testFavicon() throws SQLException {
+ try (var db = new DomainStateDb(tempFile)) {
+ db.saveIcon("www.marginalia.nu", new DomainStateDb.FaviconRecord("text/plain", "hello world".getBytes()));
+ var maybeData = db.getIcon("www.marginalia.nu");
+ assertTrue(maybeData.isPresent());
+ var actualData = maybeData.get();
+ assertEquals("text/plain", actualData.contentType());
+ assertArrayEquals("hello world".getBytes(), actualData.imageData());
+ maybeData = db.getIcon("foobar");
+ assertTrue(maybeData.isEmpty());
+ }
+ }
}


@@ -1,11 +1,9 @@
package nu.marginalia.crawl.retreival;
- import nu.marginalia.crawl.fetcher.socket.IpInterceptingNetworkInterceptor;
+ import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl;
- import okhttp3.OkHttpClient;
- import okhttp3.Request;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
@@ -15,6 +13,8 @@ import org.netpreserve.jwarc.WarcResponse;
import java.io.IOException;
import java.net.URISyntaxException;
+ import java.net.http.HttpClient;
+ import java.net.http.HttpRequest;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.NoSuchAlgorithmException;
@@ -27,11 +27,10 @@ import static org.junit.jupiter.api.Assertions.fail;
class CrawlerWarcResynchronizerTest {
Path fileName;
Path outputFile;
- OkHttpClient httpClient;
+ HttpClient httpClient;
@BeforeEach
public void setUp() throws Exception {
- httpClient = new OkHttpClient.Builder()
- .addNetworkInterceptor(new IpInterceptingNetworkInterceptor())
+ httpClient = HttpClient.newBuilder()
.build();
fileName = Files.createTempFile("test", ".warc.gz");
@@ -46,7 +45,7 @@ class CrawlerWarcResynchronizerTest {
@Test
void run() throws IOException, URISyntaxException {
- try (var oldRecorder = new WarcRecorder(fileName)) {
+ try (var oldRecorder = new WarcRecorder(fileName, new Cookies())) {
fetchUrl(oldRecorder, "https://www.marginalia.nu/");
fetchUrl(oldRecorder, "https://www.marginalia.nu/log/");
fetchUrl(oldRecorder, "https://www.marginalia.nu/feed/");
@@ -56,7 +55,7 @@ class CrawlerWarcResynchronizerTest {
var crawlFrontier = new DomainCrawlFrontier(new EdgeDomain("www.marginalia.nu"), List.of(), 100);
- try (var newRecorder = new WarcRecorder(outputFile)) {
+ try (var newRecorder = new WarcRecorder(outputFile, new Cookies())) {
new CrawlerWarcResynchronizer(crawlFrontier, newRecorder).run(fileName);
}
@@ -79,10 +78,11 @@ class CrawlerWarcResynchronizerTest {
}
void fetchUrl(WarcRecorder recorder, String url) throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException {
- var req = new Request.Builder().url(url)
- .addHeader("User-agent", "test.marginalia.nu")
- .addHeader("Accept-Encoding", "gzip")
- .get().build();
+ var req = HttpRequest.newBuilder()
+ .uri(new java.net.URI(url))
+ .header("User-agent", "test.marginalia.nu")
+ .header("Accept-Encoding", "gzip")
+ .GET().build();
recorder.fetch(httpClient, req);
}
}

Some files were not shown because too many files have changed in this diff.