mirror of https://github.com/MarginaliaSearch/MarginaliaSearch.git synced 2025-10-06 07:32:38 +02:00

Compare commits


59 Commits

Author SHA1 Message Date
Viktor Lofgren
a2b076f9be (converter) Add progress tracking for big domains in converter 2025-01-26 18:03:59 +01:00
Viktor Lofgren
c8b0a32c0f (crawler) Reduce long retention of CrawlDataReference objects and their associated SerializableCrawlDataStreams 2025-01-26 15:40:17 +01:00
Viktor Lofgren
f0d74aa3bb (converter) Fix close() ordering to prevent converter crash 2025-01-26 14:47:36 +01:00
Viktor Lofgren
74a1f100f4 (converter) Refactor to remove CrawledDomainReader and move its functionality into SerializableCrawlDataStream 2025-01-26 14:46:50 +01:00
Viktor Lofgren
eb049658e4 (converter) Add truncation at the parser step to prevent the converter from spending too much time on excessively large documents
Refactor to do this without introducing additional copies
2025-01-26 14:28:53 +01:00
Viktor Lofgren
db138b2a6f (converter) Add truncation at the parser step to prevent the converter from spending too much time on excessively large documents 2025-01-26 14:25:57 +01:00
Viktor Lofgren
1673fc284c (converter) Reduce lock contention in converter by separating the processing of full and simple-track domains 2025-01-26 13:21:46 +01:00
Viktor Lofgren
503ea57d5b (converter) Reduce lock contention in converter by separating the processing of full and simple-track domains 2025-01-26 13:18:14 +01:00
Viktor Lofgren
18ca926c7f (converter) Truncate excessively long strings in SentenceExtractor, malformed data was effectively DoS-ing the converter 2025-01-26 12:52:54 +01:00
Viktor Lofgren
db99242db2 (converter) Adding some logging around the simple processing track to investigate an issue with the converter stalling 2025-01-26 12:02:00 +01:00
Viktor Lofgren
2b9d2985ba (doc) Update readme with up-to-date install instructions. 2025-01-24 18:51:41 +01:00
Viktor Lofgren
eeb6ecd711 (search) Make it clearer that the affiliate marker applies to the result, and not the search engine's relation to the result. 2025-01-24 18:50:00 +01:00
Viktor Lofgren
1f58aeadbf (build) Upgrade JIB 2025-01-24 18:49:28 +01:00
Viktor Lofgren
3d68be64da (crawler) Add default CT when it's missing for icons 2025-01-22 13:55:47 +01:00
Viktor Lofgren
668f3b16ef (search) Redirect ^/site/$ to /site 2025-01-22 13:35:18 +01:00
Viktor Lofgren
98a340a0d1 (crawler) Add favicon data to domain state db in its own table 2025-01-22 11:41:20 +01:00
Viktor Lofgren
8862100f7e (crawler) Improve logging and error handling 2025-01-21 21:44:21 +01:00
Viktor Lofgren
274941f6de (crawler) Smarter parquet->slop crawl data migration 2025-01-21 21:26:12 +01:00
Viktor Lofgren
abec83582d Fix refactoring gore 2025-01-21 15:08:04 +01:00
Viktor Lofgren
569520c9b6 (index) Add manual adjustments for rankings based on domain 2025-01-21 15:07:43 +01:00
Viktor Lofgren
088310e998 (converter) Improve simple processing performance
A performance regression in the simple conversion track was introduced by the recent slop migration changes.  This reverts the offending change.
2025-01-21 14:13:33 +01:00
Viktor
270cab874b Merge pull request #134 from MarginaliaSearch/slop-crawl-data-spike
Store crawl data in slop instead of parquet
2025-01-21 13:34:22 +01:00
Viktor Lofgren
4c74e280d3 (crawler) Fix urlencoding in sitemap fetcher 2025-01-21 13:33:35 +01:00
Viktor Lofgren
5b347e17ac (crawler) Automatically migrate to slop from parquet when crawling 2025-01-21 13:33:14 +01:00
Viktor Lofgren
55d6ab933f Merge branch 'master' into slop-crawl-data-spike 2025-01-21 13:32:58 +01:00
Viktor Lofgren
43b74e9706 (crawler) Fix exception handler and resource leak in WarcRecorder 2025-01-20 23:45:28 +01:00
Viktor Lofgren
579a115243 (crawler) Reduce log spam from error handling in new sitemap fetcher 2025-01-20 23:17:13 +01:00
Viktor
2c67f50a43 Merge pull request #150 from MarginaliaSearch/httpclient-in-crawler
Reduce the use of 3rd party code in the crawler
2025-01-20 19:35:30 +01:00
Viktor Lofgren
78a958e2b0 (crawler) Fix broken test that started failing after the search engine moved to a new domain 2025-01-20 18:52:14 +01:00
Viktor Lofgren
4e939389b2 (crawler) New Jsoup based sitemap parser 2025-01-20 14:37:44 +01:00
Viktor Lofgren
e67a9bdb91 (crawler) Migrate away from using OkHttp in the crawler, use Java's HttpClient instead. 2025-01-19 15:07:11 +01:00
Viktor Lofgren
567e4e1237 (crawler) Fast detection and bail-out for crawler traps
Improve logging and exclude robots.txt from this logic.
2025-01-18 15:28:54 +01:00
Viktor Lofgren
4342e42722 (crawler) Fast detection and bail-out for crawler traps
Nepenthes has been doing the rounds on social media, so this adds an easy detection and mitigation mechanism for this type of trap, as sadly not all webmasters set up their robots.txt correctly.  Out-of-the-box crawl limits will also deal with this type of attack, but this fix is faster.
2025-01-17 13:02:57 +01:00
Viktor Lofgren
bc818056e6 (run) Fix templates for mariadb
Apparently the docker image contract changed at some point, and now we should spawn mariadbd and not mysqld; mariadb-admin and not mysqladmin.
2025-01-16 15:27:02 +01:00
Viktor Lofgren
de2feac238 (chore) Upgrade jib from 3.4.3 to 3.4.4 2025-01-16 15:10:45 +01:00
Viktor Lofgren
1e770205a5 (search) Dyslexia fix 2025-01-12 20:40:14 +01:00
Viktor
e44ecd6d69 Merge pull request #149 from MarginaliaSearch/vlofgren-patch-1
Update ROADMAP.md
2025-01-12 20:38:36 +01:00
Viktor
5b93a0e633 Update ROADMAP.md 2025-01-12 20:38:11 +01:00
Viktor
08fb0e5efe Update ROADMAP.md 2025-01-12 20:37:43 +01:00
Viktor
bcf67782ea Update ROADMAP.md 2025-01-12 20:37:09 +01:00
Viktor Lofgren
ef3f175ede (search) Don't clobber the search query URL with default values 2025-01-10 15:57:30 +01:00
Viktor Lofgren
bbe4b5d9fd Revert experimental changes 2025-01-10 15:52:02 +01:00
Viktor Lofgren
c67a635103 (search, experimental) Add a few debugging tracks to the search UI 2025-01-10 15:44:44 +01:00
Viktor Lofgren
20b24133fb (search, experimental) Add a few debugging tracks to the search UI 2025-01-10 15:34:48 +01:00
Viktor Lofgren
f2567677e8 (index-client) Clean up index client code
Improve error handling.  This should be a relatively rare case, but we don't want one bad index partition to blow up the entire query.
2025-01-10 15:17:07 +01:00
Viktor Lofgren
bc2c2061f2 (index-client) Clean up index client code
This has the RPC stream reception performed in parallel in separate threads, rather than blocking sequentially in the main thread, hopefully giving a slight performance boost.
2025-01-10 15:14:42 +01:00
Viktor Lofgren
1c7f5a31a5 (search) Further reduce the number of db queries by adding more caching to DbDomainQueries. 2025-01-10 14:17:29 +01:00
Viktor Lofgren
59a8ea60f7 (search) Further reduce the number of db queries by adding more caching to DbDomainQueries. 2025-01-10 14:15:22 +01:00
Viktor Lofgren
aa9b1244ea (search) Reduce the number of db queries a bit by caching data that doesn't change too often 2025-01-10 13:56:04 +01:00
Viktor Lofgren
2d17233366 (search) Reduce the number of db queries a bit by caching data that doesn't change too often 2025-01-10 13:53:56 +01:00
Viktor Lofgren
b245cc9f38 (search) Reduce the number of db queries a bit by caching data that doesn't change too often 2025-01-10 13:46:19 +01:00
Viktor Lofgren
6614d05bdf (db) Make db pool size configurable 2025-01-09 20:20:51 +01:00
Viktor Lofgren
55aeb03c4a (feeds) Replace rssreader based parsing with a custom jsoup based rss parser
This solves some issues with the rssreader based parser, which was very picky about the XML being valid.  Jsoup is much more lenient when parsing malformed XML.
2025-01-09 18:29:55 +01:00
Viktor Lofgren
faa589962f (live-capture) Browserless now requires a token 2025-01-09 14:51:11 +01:00
Viktor Lofgren
c7edd6b39f (live-capture) Browserless now requires a token 2025-01-09 14:46:05 +01:00
Viktor Lofgren
47e58a21c6 Refactor documentBody method and ContentType charset handling
Updated the `documentBody` method to improve parsing retries and error handling. Refactored `ContentType` charset processing with cleaner logic, removing redundant handling for unsupported charsets. Also, updated the version of the `slop` library in dependency settings.
2024-12-17 17:11:37 +01:00
Viktor Lofgren
3714104976 Add loader for slop data in converter.
Also alter CrawledDocument to not require String parsing of the underlying byte[] data.  This should reduce the number of large memory allocations quite significantly, hopefully reducing the GC churn a bit.
2024-12-17 15:40:24 +01:00
Viktor Lofgren
f6f036b9b1 Switch to new Slop format for crawl data storage and processing.
Replaces Parquet output and processing with the new Slop-based format. Includes data migration functionality, updates to handling and writing of crawl data, and introduces support for SLOP in domain readers and converters.
2024-12-15 19:34:03 +01:00
Viktor Lofgren
b510b7feb8 Spike for storing crawl data in slop instead of parquet
This seems to reduce RAM overhead to 100s of MB (from ~2 GB), as well as roughly double the read speeds.  On-disk size is virtually identical.
2024-12-15 15:49:47 +01:00
120 changed files with 2616 additions and 1594 deletions

View File

@@ -1,4 +1,4 @@
-# Roadmap 2024-2025
+# Roadmap 2025
 This is a roadmap with major features planned for Marginalia Search.
@@ -30,12 +30,6 @@ Retaining the ability to independently crawl the web is still strongly desirable
 The search engine has a bit of a problem showing spicy content mixed in with the results. It would be desirable to have a way to filter this out. It's likely something like a URL blacklist (e.g. [UT1](https://dsi.ut-capitole.fr/blacklists/index_en.php) )
 combined with naive bayesian filter would go a long way, or something more sophisticated...?
-## Web Design Overhaul
-The design is kinda clunky and hard to maintain, and needlessly outdated-looking.
-In progress: PR [#127](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/127) -- demo available at https://test.marginalia.nu/
 ## Additional Language Support
 It would be desirable if the search engine supported more languages than English. This is partially about
@@ -62,8 +56,31 @@ filter for any API consumer.
 I've talked to the stract dev and he does not think it's a good idea to mimic their optics language, which is quite ad-hoc, but instead to work together to find some new common description language for this.
+## Show favicons next to search results
+This is expected from search engines. Basic proof of concept sketch of fetching this data has been done, but the feature is some way from being reality.
+## Specialized crawler for github
+One of the search engine's biggest limitations right now is that it does not index github at all. A specialized crawler that fetches at least the readme.md would go a long way toward providing search capabilities in this domain.
 # Completed
+## Web Design Overhaul (COMPLETED 2025-01)
+The design is kinda clunky and hard to maintain, and needlessly outdated-looking.
+PR [#127](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/127)
+## Finalize RSS support (COMPLETED 2024-11)
+Marginalia has experimental RSS preview support for a few domains. This works well and
+it should be extended to all domains. It would also be interesting to offer search of the
+RSS data itself, or use the RSS set to feed a special live index that updates faster than the
+main dataset.
+Completed with PR [#122](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/122) and PR [#125](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/125)
 ## Proper Position Index (COMPLETED 2024-09)
 The search engine uses a fixed width bit mask to indicate word positions. It has the benefit
@@ -76,11 +93,3 @@ list, as is the civilized way of doing this.
 Completed with PR [#99](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/99)
-## Finalize RSS support (COMPLETED 2024-11)
-Marginalia has experimental RSS preview support for a few domains. This works well and
-it should be extended to all domains. It would also be interesting to offer search of the
-RSS data itself, or use the RSS set to feed a special live index that updates faster than the
-main dataset.
-Completed with PR [#122](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/122) and PR [#125](https://github.com/MarginaliaSearch/MarginaliaSearch/pull/125)

View File

@@ -5,7 +5,7 @@ plugins {
     // This is a workaround for a bug in the Jib plugin that causes it to stall randomly
     // https://github.com/GoogleContainerTools/jib/issues/3347
-    id 'com.google.cloud.tools.jib' version '3.4.3' apply(false)
+    id 'com.google.cloud.tools.jib' version '3.4.4' apply(false)
 }
 group 'marginalia'
@@ -47,7 +47,7 @@ ext {
     dockerImageBase='container-registry.oracle.com/graalvm/jdk:23'
     dockerImageTag='latest'
     dockerImageRegistry='marginalia'
-    jibVersion = '3.4.3'
+    jibVersion = '3.4.4'
 }

View File

@@ -20,7 +20,10 @@ public class DbDomainQueries {
     private final HikariDataSource dataSource;
     private static final Logger logger = LoggerFactory.getLogger(DbDomainQueries.class);
     private final Cache<EdgeDomain, Integer> domainIdCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
+    private final Cache<Integer, EdgeDomain> domainNameCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
+    private final Cache<String, List<DomainWithNode>> siblingsCache = CacheBuilder.newBuilder().maximumSize(10_000).build();
     @Inject
     public DbDomainQueries(HikariDataSource dataSource)
@@ -30,16 +33,21 @@ public class DbDomainQueries {
     public Integer getDomainId(EdgeDomain domain) throws NoSuchElementException {
-        try (var connection = dataSource.getConnection()) {
+        try {
             return domainIdCache.get(domain, () -> {
-                try (var stmt = connection.prepareStatement("SELECT ID FROM EC_DOMAIN WHERE DOMAIN_NAME=?")) {
+                try (var connection = dataSource.getConnection();
+                     var stmt = connection.prepareStatement("SELECT ID FROM EC_DOMAIN WHERE DOMAIN_NAME=?")) {
                     stmt.setString(1, domain.toString());
                     var rsp = stmt.executeQuery();
                     if (rsp.next()) {
                         return rsp.getInt(1);
                     }
                 }
+                catch (SQLException ex) {
+                    throw new RuntimeException(ex);
+                }
                 throw new NoSuchElementException();
             });
         }
@@ -49,9 +57,6 @@ public class DbDomainQueries {
         catch (ExecutionException ex) {
             throw new RuntimeException(ex.getCause());
         }
-        catch (SQLException ex) {
-            throw new RuntimeException(ex);
-        }
     }
     public OptionalInt tryGetDomainId(EdgeDomain domain) {
@@ -84,47 +89,55 @@ public class DbDomainQueries {
     }
     public Optional<EdgeDomain> getDomain(int id) {
-        try (var connection = dataSource.getConnection()) {
+        EdgeDomain existing = domainNameCache.getIfPresent(id);
+        if (existing != null) {
+            return Optional.of(existing);
+        }
+        try (var connection = dataSource.getConnection()) {
             try (var stmt = connection.prepareStatement("SELECT DOMAIN_NAME FROM EC_DOMAIN WHERE ID=?")) {
                 stmt.setInt(1, id);
                 var rsp = stmt.executeQuery();
                 if (rsp.next()) {
-                    return Optional.of(new EdgeDomain(rsp.getString(1)));
+                    var val = new EdgeDomain(rsp.getString(1));
+                    domainNameCache.put(id, val);
+                    return Optional.of(val);
                 }
                 return Optional.empty();
             }
         }
+        catch (UncheckedExecutionException ex) {
+            throw new RuntimeException(ex.getCause());
+        }
         catch (SQLException ex) {
             throw new RuntimeException(ex);
         }
     }
-    public List<DomainWithNode> otherSubdomains(EdgeDomain domain, int cnt) {
-        List<DomainWithNode> ret = new ArrayList<>();
-        try (var conn = dataSource.getConnection();
-             var stmt = conn.prepareStatement("SELECT DOMAIN_NAME, NODE_AFFINITY FROM EC_DOMAIN WHERE DOMAIN_TOP = ? LIMIT ?")) {
-            stmt.setString(1, domain.topDomain);
-            stmt.setInt(2, cnt);
-            var rs = stmt.executeQuery();
-            while (rs.next()) {
-                var sibling = new EdgeDomain(rs.getString(1));
-                if (sibling.equals(domain))
-                    continue;
-                ret.add(new DomainWithNode(sibling, rs.getInt(2)));
-            }
-        } catch (SQLException e) {
-            logger.error("Failed to get domain neighbors");
-        }
-        return ret;
+    public List<DomainWithNode> otherSubdomains(EdgeDomain domain, int cnt) throws ExecutionException {
+        String topDomain = domain.topDomain;
+        return siblingsCache.get(topDomain, () -> {
+            List<DomainWithNode> ret = new ArrayList<>();
+            try (var conn = dataSource.getConnection();
+                 var stmt = conn.prepareStatement("SELECT DOMAIN_NAME, NODE_AFFINITY FROM EC_DOMAIN WHERE DOMAIN_TOP = ? LIMIT ?")) {
+                stmt.setString(1, topDomain);
+                stmt.setInt(2, cnt);
+                var rs = stmt.executeQuery();
+                while (rs.next()) {
+                    var sibling = new EdgeDomain(rs.getString(1));
+                    if (sibling.equals(domain))
+                        continue;
+                    ret.add(new DomainWithNode(sibling, rs.getInt(2)));
+                }
+            } catch (SQLException e) {
+                logger.error("Failed to get domain neighbors");
+            }
+            return ret;
+        });
     }
     public record DomainWithNode (EdgeDomain domain, int nodeAffinity) {
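The getDomain and otherSubdomains changes above follow the same cache-aside pattern as the existing domainIdCache: consult the Guava cache first, and only hit SQL on a miss. A minimal sketch of that pattern, with illustrative key/value types and a hypothetical loadNameFromDatabase helper standing in for the prepared-statement lookup:

    // Same sizing as the fields added above; the loader runs only on a cache miss.
    Cache<Integer, String> domainNames = CacheBuilder.newBuilder().maximumSize(10_000).build();
    try {
        String name = domainNames.get(5, () -> loadNameFromDatabase(5)); // hypothetical loader
    }
    catch (ExecutionException ex) {
        throw new RuntimeException(ex.getCause()); // unwrap, as getDomainId does above
    }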

View File

@@ -1,118 +0,0 @@
package nu.marginalia.db;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.OptionalInt;
/** Class used in exporting data. This is intended to be used for a brief time
* and then discarded, not kept around as a service.
*/
public class DbDomainStatsExportMultitool implements AutoCloseable {
private final Connection connection;
private final int nodeId;
private final PreparedStatement knownUrlsQuery;
private final PreparedStatement visitedUrlsQuery;
private final PreparedStatement goodUrlsQuery;
private final PreparedStatement domainNameToId;
private final PreparedStatement allDomainsQuery;
private final PreparedStatement crawlQueueDomains;
private final PreparedStatement indexedDomainsQuery;
public DbDomainStatsExportMultitool(HikariDataSource dataSource, int nodeId) throws SQLException {
this.connection = dataSource.getConnection();
this.nodeId = nodeId;
knownUrlsQuery = connection.prepareStatement("""
SELECT KNOWN_URLS
FROM EC_DOMAIN INNER JOIN DOMAIN_METADATA
ON EC_DOMAIN.ID=DOMAIN_METADATA.ID
WHERE DOMAIN_NAME=?
""");
visitedUrlsQuery = connection.prepareStatement("""
SELECT VISITED_URLS
FROM EC_DOMAIN INNER JOIN DOMAIN_METADATA
ON EC_DOMAIN.ID=DOMAIN_METADATA.ID
WHERE DOMAIN_NAME=?
""");
goodUrlsQuery = connection.prepareStatement("""
SELECT GOOD_URLS
FROM EC_DOMAIN INNER JOIN DOMAIN_METADATA
ON EC_DOMAIN.ID=DOMAIN_METADATA.ID
WHERE DOMAIN_NAME=?
""");
domainNameToId = connection.prepareStatement("""
SELECT ID
FROM EC_DOMAIN
WHERE DOMAIN_NAME=?
""");
allDomainsQuery = connection.prepareStatement("""
SELECT DOMAIN_NAME
FROM EC_DOMAIN
""");
crawlQueueDomains = connection.prepareStatement("""
SELECT DOMAIN_NAME
FROM CRAWL_QUEUE
""");
indexedDomainsQuery = connection.prepareStatement("""
SELECT DOMAIN_NAME
FROM EC_DOMAIN
WHERE INDEXED > 0
""");
}
public OptionalInt getVisitedUrls(String domainName) throws SQLException {
return executeNameToIntQuery(domainName, visitedUrlsQuery);
}
public OptionalInt getDomainId(String domainName) throws SQLException {
return executeNameToIntQuery(domainName, domainNameToId);
}
public List<String> getCrawlQueueDomains() throws SQLException {
return executeListQuery(crawlQueueDomains, 100);
}
public List<String> getAllIndexedDomains() throws SQLException {
return executeListQuery(indexedDomainsQuery, 100_000);
}
private OptionalInt executeNameToIntQuery(String domainName, PreparedStatement statement)
throws SQLException {
statement.setString(1, domainName);
var rs = statement.executeQuery();
if (rs.next()) {
return OptionalInt.of(rs.getInt(1));
}
return OptionalInt.empty();
}
private List<String> executeListQuery(PreparedStatement statement, int sizeHint) throws SQLException {
List<String> ret = new ArrayList<>(sizeHint);
var rs = statement.executeQuery();
while (rs.next()) {
ret.add(rs.getString(1));
}
return ret;
}
@Override
public void close() throws SQLException {
knownUrlsQuery.close();
goodUrlsQuery.close();
visitedUrlsQuery.close();
allDomainsQuery.close();
crawlQueueDomains.close();
domainNameToId.close();
connection.close();
}
}

View File

@@ -89,7 +89,7 @@ public class DatabaseModule extends AbstractModule {
         config.addDataSourceProperty("prepStmtCacheSize", "250");
         config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
-        config.setMaximumPoolSize(5);
+        config.setMaximumPoolSize(Integer.getInteger("db.poolSize", 5));
         config.setMinimumIdle(2);
         config.setMaxLifetime(Duration.ofMinutes(9).toMillis());
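Since the pool size is now read through Integer.getInteger, a deployment can raise it with a JVM system property instead of a code change; a small sketch (the flag value 16 is just an example):

    // -Ddb.poolSize=16 on the service's JVM command line overrides the default of 5
    int poolSize = Integer.getInteger("db.poolSize", 5);
    config.setMaximumPoolSize(poolSize);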

View File

@@ -20,6 +20,7 @@ public enum ExecutorActor {
     EXPORT_FEEDS(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
     EXPORT_SAMPLE_DATA(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
     DOWNLOAD_SAMPLE(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
+    MIGRATE_CRAWL_DATA(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED),
     PROC_CONVERTER_SPAWNER(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED, NodeProfile.SIDELOAD),
     PROC_LOADER_SPAWNER(NodeProfile.BATCH_CRAWL, NodeProfile.MIXED, NodeProfile.SIDELOAD),

View File

@@ -66,6 +66,7 @@ public class ExecutorActorControlService {
                                        DownloadSampleActor downloadSampleActor,
                                        ScrapeFeedsActor scrapeFeedsActor,
                                        ExecutorActorStateMachines stateMachines,
+                                       MigrateCrawlDataActor migrateCrawlDataActor,
                                        ExportAllPrecessionActor exportAllPrecessionActor,
                                        UpdateRssActor updateRssActor) throws SQLException {
         this.messageQueueFactory = messageQueueFactory;
@@ -107,6 +108,8 @@ public class ExecutorActorControlService {
         register(ExecutorActor.SCRAPE_FEEDS, scrapeFeedsActor);
         register(ExecutorActor.UPDATE_RSS, updateRssActor);
+        register(ExecutorActor.MIGRATE_CRAWL_DATA, migrateCrawlDataActor);
         if (serviceConfiguration.node() == 1) {
             register(ExecutorActor.PREC_EXPORT_ALL, exportAllPrecessionActor);
         }

View File

@@ -0,0 +1,130 @@
package nu.marginalia.actor.task;
import com.google.gson.Gson;
import jakarta.inject.Inject;
import jakarta.inject.Singleton;
import nu.marginalia.actor.prototype.RecordActorPrototype;
import nu.marginalia.actor.state.ActorStep;
import nu.marginalia.io.CrawlerOutputFile;
import nu.marginalia.process.log.WorkLog;
import nu.marginalia.process.log.WorkLogEntry;
import nu.marginalia.slop.SlopCrawlDataRecord;
import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorage;
import nu.marginalia.storage.model.FileStorageId;
import org.apache.logging.log4j.util.Strings;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;
@Singleton
public class MigrateCrawlDataActor extends RecordActorPrototype {
private final FileStorageService fileStorageService;
private static final Logger logger = LoggerFactory.getLogger(MigrateCrawlDataActor.class);
@Inject
public MigrateCrawlDataActor(Gson gson, FileStorageService fileStorageService) {
super(gson);
this.fileStorageService = fileStorageService;
}
public record Run(long fileStorageId) implements ActorStep {}
@Override
public ActorStep transition(ActorStep self) throws Exception {
return switch (self) {
case Run(long fileStorageId) -> {
FileStorage storage = fileStorageService.getStorage(FileStorageId.of(fileStorageId));
Path root = storage.asPath();
Path crawlerLog = root.resolve("crawler.log");
Path newCrawlerLog = Files.createTempFile(root, "crawler", ".migrate.log");
try (WorkLog workLog = new WorkLog(newCrawlerLog)) {
for (Map.Entry<WorkLogEntry, Path> item : WorkLog.iterableMap(crawlerLog, new CrawlDataLocator(root))) {
var entry = item.getKey();
var path = item.getValue();
logger.info("Converting {}", entry.id());
if (path.toFile().getName().endsWith(".parquet")) {
String domain = entry.id();
String id = Integer.toHexString(domain.hashCode());
Path outputFile = CrawlerOutputFile.createSlopPath(root, id, domain);
SlopCrawlDataRecord.convertFromParquet(path, outputFile);
workLog.setJobToFinished(entry.id(), outputFile.toString(), entry.cnt());
}
else {
workLog.setJobToFinished(entry.id(), path.toString(), entry.cnt());
}
}
}
Path oldCrawlerLog = Files.createTempFile(root, "crawler-", ".migrate.old.log");
Files.move(crawlerLog, oldCrawlerLog);
Files.move(newCrawlerLog, crawlerLog);
yield new End();
}
default -> new Error();
};
}
private static class CrawlDataLocator implements Function<WorkLogEntry, Optional<Map.Entry<WorkLogEntry, Path>>> {
private final Path crawlRootDir;
CrawlDataLocator(Path crawlRootDir) {
this.crawlRootDir = crawlRootDir;
}
@Override
public Optional<Map.Entry<WorkLogEntry, Path>> apply(WorkLogEntry entry) {
var path = getCrawledFilePath(crawlRootDir, entry.path());
if (!Files.exists(path)) {
return Optional.empty();
}
try {
return Optional.of(Map.entry(entry, path));
}
catch (Exception ex) {
return Optional.empty();
}
}
private Path getCrawledFilePath(Path crawlDir, String fileName) {
int sp = fileName.lastIndexOf('/');
// Normalize the filename
if (sp >= 0 && sp + 1< fileName.length())
fileName = fileName.substring(sp + 1);
if (fileName.length() < 4)
fileName = Strings.repeat("0", 4 - fileName.length()) + fileName;
String sp1 = fileName.substring(0, 2);
String sp2 = fileName.substring(2, 4);
return crawlDir.resolve(sp1).resolve(sp2).resolve(fileName);
}
}
@Override
public String describe() {
return "Migrates crawl data to the latest format";
}
}

View File

@@ -15,7 +15,9 @@ import java.util.Map;
 /** Client for local browserless.io API */
 public class BrowserlessClient implements AutoCloseable {
     private static final Logger logger = LoggerFactory.getLogger(BrowserlessClient.class);
+    private static final String BROWSERLESS_TOKEN = System.getProperty("live-capture.browserless-token", "BROWSERLESS_TOKEN");
     private final HttpClient httpClient = HttpClient.newBuilder()
             .version(HttpClient.Version.HTTP_1_1)
@@ -36,7 +38,7 @@ public class BrowserlessClient implements AutoCloseable {
         );
         var request = HttpRequest.newBuilder()
-                .uri(browserlessURI.resolve("/content"))
+                .uri(browserlessURI.resolve("/content?token="+BROWSERLESS_TOKEN))
                 .method("POST", HttpRequest.BodyPublishers.ofString(
                         gson.toJson(requestData)
                 ))
@@ -63,7 +65,7 @@ public class BrowserlessClient implements AutoCloseable {
         );
         var request = HttpRequest.newBuilder()
-                .uri(browserlessURI.resolve("/screenshot"))
+                .uri(browserlessURI.resolve("/screenshot?token="+BROWSERLESS_TOKEN))
                 .method("POST", HttpRequest.BodyPublishers.ofString(
                         gson.toJson(requestData)
                 ))
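The token is read once from a system property and appended as a query parameter on both endpoints, so the service and the browserless container have to agree on it; a hedged sketch of the wiring (the values are placeholders, not the project's actual deployment config):

    // Service side: -Dlive-capture.browserless-token=some-secret on the JVM command line
    String token = System.getProperty("live-capture.browserless-token", "BROWSERLESS_TOKEN");
    URI endpoint = URI.create("http://browserless:3000/content?token=" + token);
    // Container side: start browserless with a matching TOKEN environment variable,
    // as the updated BrowserlessClientTest further down does via withEnv(Map.of("TOKEN", ...)).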

View File

@@ -1,6 +1,6 @@
 package nu.marginalia.rss.model;
-import com.apptasticsoftware.rssreader.Item;
+import nu.marginalia.rss.svc.SimpleFeedParser;
 import org.apache.commons.lang3.StringUtils;
 import org.jetbrains.annotations.NotNull;
 import org.jsoup.Jsoup;
@@ -18,37 +18,33 @@ public record FeedItem(String title,
     public static final int MAX_DESC_LENGTH = 255;
     public static final DateTimeFormatter DATE_FORMAT = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
-    public static FeedItem fromItem(Item item, boolean keepFragment) {
-        String title = item.getTitle().orElse("");
+    public static FeedItem fromItem(SimpleFeedParser.ItemData item, boolean keepFragment) {
+        String title = item.title();
         String date = getItemDate(item);
         String description = getItemDescription(item);
         String url;
-        if (keepFragment || item.getLink().isEmpty()) {
-            url = item.getLink().orElse("");
+        if (keepFragment) {
+            url = item.url();
         }
         else {
             try {
-                String link = item.getLink().get();
+                String link = item.url();
                 var linkUri = new URI(link);
                 var cleanUri = new URI(linkUri.getScheme(), linkUri.getAuthority(), linkUri.getPath(), linkUri.getQuery(), null);
                 url = cleanUri.toString();
             }
             catch (Exception e) {
                 // fallback to original link if we can't clean it, this is not a very important step
-                url = item.getLink().get();
+                url = item.url();
             }
         }
         return new FeedItem(title, date, description, url);
     }
-    private static String getItemDescription(Item item) {
-        Optional<String> description = item.getDescription();
-        if (description.isEmpty())
-            return "";
-        String rawDescription = description.get();
+    private static String getItemDescription(SimpleFeedParser.ItemData item) {
+        String rawDescription = item.description();
         if (rawDescription.indexOf('<') >= 0) {
             rawDescription = Jsoup.parseBodyFragment(rawDescription).text();
         }
@@ -58,15 +54,18 @@ public record FeedItem(String title,
     // e.g. http://fabiensanglard.net/rss.xml does dates like this: 1 Apr 2021 00:00:00 +0000
     private static final DateTimeFormatter extraFormatter = DateTimeFormatter.ofPattern("d MMM yyyy HH:mm:ss Z");
-    private static String getItemDate(Item item) {
+    private static String getItemDate(SimpleFeedParser.ItemData item) {
         Optional<ZonedDateTime> zonedDateTime = Optional.empty();
         try {
             zonedDateTime = item.getPubDateZonedDateTime();
         }
         catch (Exception e) {
-            zonedDateTime = item.getPubDate()
-                    .map(extraFormatter::parse)
-                    .map(ZonedDateTime::from);
+            try {
+                zonedDateTime = Optional.of(ZonedDateTime.from(extraFormatter.parse(item.pubDate())));
+            }
+            catch (Exception e2) {
+                // ignore
+            }
         }
         return zonedDateTime.map(date -> date.format(DATE_FORMAT)).orElse("");

View File

@@ -1,7 +1,5 @@
 package nu.marginalia.rss.svc;
-import com.apptasticsoftware.rssreader.Item;
-import com.apptasticsoftware.rssreader.RssReader;
 import com.google.inject.Inject;
 import com.opencsv.CSVReader;
 import nu.marginalia.WmsaHome;
@@ -20,7 +18,6 @@ import nu.marginalia.storage.FileStorageService;
 import nu.marginalia.storage.model.FileStorage;
 import nu.marginalia.storage.model.FileStorageType;
 import nu.marginalia.util.SimpleBlockingThreadPool;
-import org.apache.commons.io.input.BOMInputStream;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -32,7 +29,6 @@ import java.net.URISyntaxException;
 import java.net.http.HttpClient;
 import java.net.http.HttpRequest;
 import java.net.http.HttpResponse;
-import java.nio.charset.StandardCharsets;
 import java.sql.SQLException;
 import java.time.*;
 import java.time.format.DateTimeFormatter;
@@ -48,8 +44,6 @@ public class FeedFetcherService {
     private static final int MAX_FEED_ITEMS = 10;
     private static final Logger logger = LoggerFactory.getLogger(FeedFetcherService.class);
-    private final RssReader rssReader = new RssReader();
     private final FeedDb feedDb;
     private final FileStorageService fileStorageService;
     private final NodeConfigurationService nodeConfigurationService;
@@ -72,17 +66,6 @@ public class FeedFetcherService {
         this.nodeConfigurationService = nodeConfigurationService;
         this.serviceHeartbeat = serviceHeartbeat;
         this.executorClient = executorClient;
-        // Add support for some alternate date tags for atom
-        rssReader.addItemExtension("issued", this::setDateFallback);
-        rssReader.addItemExtension("created", this::setDateFallback);
-    }
-    private void setDateFallback(Item item, String value) {
-        if (item.getPubDate().isEmpty()) {
-            item.setPubDate(value);
-        }
     }
     public enum UpdateMode {
@@ -371,12 +354,7 @@
     public FeedItems parseFeed(String feedData, FeedDefinition definition) {
         try {
-            feedData = sanitizeEntities(feedData);
-            List<Item> rawItems = rssReader.read(
-                    // Massage the data to maximize the possibility of the flaky XML parser consuming it
-                    new BOMInputStream(new ByteArrayInputStream(feedData.trim().getBytes(StandardCharsets.UTF_8)), false)
-            ).toList();
+            List<SimpleFeedParser.ItemData> rawItems = SimpleFeedParser.parse(feedData);
             boolean keepUriFragment = rawItems.size() < 2 || areFragmentsDisparate(rawItems);
@@ -399,33 +377,6 @@ public class FeedFetcherService {
         }
     }
-    private static final Map<String, String> HTML_ENTITIES = Map.of(
-            "&raquo;", "»",
-            "&laquo;", "«",
-            "&mdash;", "--",
-            "&ndash;", "-",
-            "&rsquo;", "'",
-            "&lsquo;", "'",
-            "&quot;", "\"",
-            "&nbsp;", ""
-    );
-    /** The XML parser will blow up if you insert HTML entities in the feed XML,
-     * which is unfortunately relatively common. Replace them as far as is possible
-     * with their corresponding characters
-     */
-    static String sanitizeEntities(String feedData) {
-        String result = feedData;
-        for (Map.Entry<String, String> entry : HTML_ENTITIES.entrySet()) {
-            result = result.replace(entry.getKey(), entry.getValue());
-        }
-        // Handle lone ampersands not part of a recognized XML entity
-        result = result.replaceAll("&(?!(amp|lt|gt|apos|quot);)", "&amp;");
-        return result;
-    }
     /** Decide whether to keep URI fragments in the feed items.
      * <p></p>
      * We keep fragments if there are multiple different fragments in the items.
@@ -433,16 +384,16 @@
      * @param items The items to check
      * @return True if we should keep the fragments, false otherwise
      */
-    private boolean areFragmentsDisparate(List<Item> items) {
+    private boolean areFragmentsDisparate(List<SimpleFeedParser.ItemData> items) {
         Set<String> seenFragments = new HashSet<>();
         try {
             for (var item : items) {
-                if (item.getLink().isEmpty()) {
+                if (item.url().isBlank()) {
                     continue;
                 }
-                var link = item.getLink().get();
+                var link = item.url();
                 if (!link.contains("#")) {
                     continue;
                 }

View File

@@ -10,6 +10,7 @@ import java.io.IOException;
 import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.nio.file.Path;
+import java.util.function.BiConsumer;
 /** Utility for recording fetched feeds to a journal, useful in debugging feed parser issues.
  */
@@ -59,6 +60,17 @@ public interface FeedJournal extends AutoCloseable {
             urlWriter.put(url);
             contentsWriter.put(contents);
         }
+    }
+
+    static void replay(Path journalPath, BiConsumer<String, String> urlAndContent) throws IOException {
+        try (SlopTable table = new SlopTable(journalPath)) {
+            final StringColumn.Reader urlReader = urlColumn.open(table);
+            final StringColumn.Reader contentsReader = contentsColumn.open(table);
+            while (urlReader.hasRemaining()) {
+                urlAndContent.accept(urlReader.get(), contentsReader.get());
+            }
+        }
     }
 }
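A hypothetical debugging session using the new replay helper might look like the following; the journal path is invented for illustration:

    Path journal = Path.of("/tmp/feed-journal");
    FeedJournal.replay(journal, (url, contents) ->
            System.out.println(url + " -> " + contents.length() + " chars of feed XML"));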

View File

@@ -0,0 +1,94 @@
package nu.marginalia.rss.svc;
import com.apptasticsoftware.rssreader.DateTimeParser;
import com.apptasticsoftware.rssreader.util.Default;
import org.jsoup.Jsoup;
import org.jsoup.parser.Parser;
import java.time.ZonedDateTime;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
public class SimpleFeedParser {
private static final DateTimeParser dateTimeParser = Default.getDateTimeParser();
public record ItemData (
String title,
String description,
String url,
String pubDate
) {
public boolean isWellFormed() {
return title != null && !title.isBlank() &&
description != null && !description.isBlank() &&
url != null && !url.isBlank() &&
pubDate != null && !pubDate.isBlank();
}
public Optional<ZonedDateTime> getPubDateZonedDateTime() {
try {
return Optional.ofNullable(dateTimeParser.parse(pubDate()));
}
catch (Exception e) {
return Optional.empty();
}
}
}
public static List<ItemData> parse(String content) {
var doc = Jsoup.parse(content, Parser.xmlParser());
List<ItemData> ret = new ArrayList<>();
doc.select("item, entry").forEach(element -> {
String link = "";
String title = "";
String description = "";
String pubDate = "";
for (String attr : List.of("title", "dc:title")) {
if (!title.isBlank())
break;
var tag = element.getElementsByTag(attr).first();
if (tag != null) {
title = tag.text();
}
}
for (String attr : List.of("title", "summary", "content", "description", "dc:description")) {
if (!description.isBlank())
break;
var tag = element.getElementsByTag(attr).first();
if (tag != null) {
description = tag.text();
}
}
for (String attr : List.of("pubDate", "published", "updated", "issued", "created", "dc:date")) {
if (!pubDate.isBlank())
break;
var tag = element.getElementsByTag(attr).first();
if (tag != null) {
pubDate = tag.text();
}
}
for (String attr : List.of("link", "url")) {
if (!link.isBlank())
break;
var tag = element.getElementsByTag(attr).first();
if (tag != null) {
link = tag.text();
}
}
ret.add(new ItemData(title, description, link, pubDate));
});
return ret;
}
}
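A rough usage sketch of the new jsoup-based parser above, with a made-up RSS snippet; parse() returns one ItemData per item or entry element:

    String xml = "<rss><channel><item>"
            + "<title>Example post</title>"
            + "<link>https://example.com/post#frag</link>"
            + "<pubDate>1 Apr 2021 00:00:00 +0000</pubDate>"
            + "<description>Hello &amp; welcome</description>"
            + "</item></channel></rss>";
    for (SimpleFeedParser.ItemData item : SimpleFeedParser.parse(xml)) {
        System.out.println(item.title() + " " + item.url() + " " + item.getPubDateZonedDateTime());
    }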

View File

@@ -2,16 +2,21 @@ package nu.marginalia.livecapture;
 import org.junit.jupiter.api.Assertions;
 import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.Tag;
 import org.junit.jupiter.api.Test;
 import org.testcontainers.containers.GenericContainer;
 import org.testcontainers.junit.jupiter.Testcontainers;
 import org.testcontainers.utility.DockerImageName;
 import java.net.URI;
+import java.util.Map;
 @Testcontainers
+@Tag("slow")
 public class BrowserlessClientTest {
-    static GenericContainer<?> container = new GenericContainer<>(DockerImageName.parse("browserless/chrome")).withExposedPorts(3000);
+    static GenericContainer<?> container = new GenericContainer<>(DockerImageName.parse("browserless/chrome"))
+            .withEnv(Map.of("TOKEN", "BROWSERLESS_TOKEN"))
+            .withExposedPorts(3000);
 @BeforeAll
 public static void setup() {

View File

@@ -1,50 +0,0 @@
package nu.marginalia.rss.svc;
import com.apptasticsoftware.rssreader.Item;
import com.apptasticsoftware.rssreader.RssReader;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import java.util.List;
import java.util.Optional;
public class TestXmlSanitization {
@Test
public void testPreservedEntities() {
Assertions.assertEquals("&amp;", FeedFetcherService.sanitizeEntities("&amp;"));
Assertions.assertEquals("&lt;", FeedFetcherService.sanitizeEntities("&lt;"));
Assertions.assertEquals("&gt;", FeedFetcherService.sanitizeEntities("&gt;"));
Assertions.assertEquals("&apos;", FeedFetcherService.sanitizeEntities("&apos;"));
}
@Test
public void testNlnetTitleTag() {
// The NLnet atom feed puts HTML tags in the entry/title tags, which breaks the vanilla RssReader code
// Verify we're able to consume and strip out the HTML tags
RssReader r = new RssReader();
List<Item> items = r.read(ClassLoader.getSystemResourceAsStream("nlnet.atom")).toList();
Assertions.assertEquals(1, items.size());
for (var item : items) {
Assertions.assertEquals(Optional.of("50 Free and Open Source Projects Selected for NGI Zero grants"), item.getTitle());
}
}
@Test
public void testStrayAmpersand() {
Assertions.assertEquals("Bed &amp; Breakfast", FeedFetcherService.sanitizeEntities("Bed & Breakfast"));
}
@Test
public void testTranslatedHtmlEntity() {
Assertions.assertEquals("Foo -- Bar", FeedFetcherService.sanitizeEntities("Foo &mdash; Bar"));
}
@Test
public void testTranslatedHtmlEntityQuot() {
Assertions.assertEquals("\"Bob\"", FeedFetcherService.sanitizeEntities("&quot;Bob&quot;"));
}
}

View File

@@ -16,20 +16,19 @@ import org.slf4j.LoggerFactory;
 import java.util.ArrayList;
 import java.util.Comparator;
-import java.util.Iterator;
 import java.util.List;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
+import java.util.concurrent.atomic.AtomicInteger;
-import static java.lang.Math.clamp;
+import java.util.function.Consumer;
 @Singleton
 public class IndexClient {
     private static final Logger logger = LoggerFactory.getLogger(IndexClient.class);
     private final GrpcMultiNodeChannelPool<IndexApiGrpc.IndexApiBlockingStub> channelPool;
     private final DomainBlacklistImpl blacklist;
-    private static final ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
+    private static final ExecutorService executor = Executors.newCachedThreadPool();
     @Inject
     public IndexClient(GrpcChannelPoolFactory channelPoolFactory, DomainBlacklistImpl blacklist) {
@@ -51,40 +50,37 @@
     /** Execute a query on the index partitions and return the combined results. */
     public AggregateQueryResponse executeQueries(RpcIndexQuery indexRequest, Pagination pagination) {
-        List<CompletableFuture<Iterator<RpcDecoratedResultItem>>> futures =
-                channelPool.call(IndexApiGrpc.IndexApiBlockingStub::query)
-                        .async(executor)
-                        .runEach(indexRequest);
         final int requestedMaxResults = indexRequest.getQueryLimits().getResultsTotal();
-        final int resultsUpperBound = requestedMaxResults * channelPool.getNumNodes();
-        List<RpcDecoratedResultItem> results = new ArrayList<>(resultsUpperBound);
-        for (var future : futures) {
-            try {
-                future.get().forEachRemaining(results::add);
-            }
-            catch (Exception e) {
-                logger.error("Downstream exception", e);
-            }
-        }
+        AtomicInteger totalNumResults = new AtomicInteger(0);
+        List<RpcDecoratedResultItem> results =
+                channelPool.call(IndexApiGrpc.IndexApiBlockingStub::query)
+                        .async(executor)
+                        .runEach(indexRequest)
+                        .stream()
+                        .map(future -> future.thenApply(iterator -> {
+                            List<RpcDecoratedResultItem> ret = new ArrayList<>(requestedMaxResults);
+                            iterator.forEachRemaining(ret::add);
+                            totalNumResults.addAndGet(ret.size());
+                            return ret;
+                        }))
+                        .mapMulti((CompletableFuture<List<RpcDecoratedResultItem>> fut, Consumer<List<RpcDecoratedResultItem>> c) -> {
+                            try {
+                                c.accept(fut.join());
+                            } catch (Exception e) {
+                                logger.error("Error while fetching results", e);
+                            }
+                        })
+                        .flatMap(List::stream)
+                        .filter(item -> !isBlacklisted(item))
+                        .sorted(comparator)
+                        .skip(Math.max(0, (pagination.page - 1) * pagination.pageSize))
+                        .limit(pagination.pageSize)
+                        .toList();
-        // Sort the results by ranking score and remove blacklisted domains
-        results.sort(comparator);
-        results.removeIf(this::isBlacklisted);
-        int numReceivedResults = results.size();
-        // pagination is typically 1-indexed, so we need to adjust the start and end indices
-        int indexStart = (pagination.page - 1) * pagination.pageSize;
-        int indexEnd = (pagination.page) * pagination.pageSize;
-        results = results.subList(
-                clamp(indexStart, 0, Math.max(0, results.size() - 1)), // from is inclusive, so subtract 1 from size()
-                clamp(indexEnd, 0, results.size()));
-        return new AggregateQueryResponse(results, pagination.page(), numReceivedResults);
+        return new AggregateQueryResponse(results, pagination.page(), totalNumResults.get());
     }
     private boolean isBlacklisted(RpcDecoratedResultItem item) {

View File

@@ -0,0 +1,119 @@
package nu.marginalia.index.results;
import com.google.inject.Inject;
import com.google.inject.Singleton;
import gnu.trove.map.hash.TIntDoubleHashMap;
import nu.marginalia.WmsaHome;
import nu.marginalia.db.DbDomainQueries;
import nu.marginalia.model.EdgeDomain;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.OptionalInt;
import java.util.concurrent.TimeUnit;
@Singleton
public class DomainRankingOverrides {
private final DbDomainQueries domainQueries;
private volatile TIntDoubleHashMap rankingFactors = new TIntDoubleHashMap(100, 0.75f, -1, 1.);
private static final Logger logger = LoggerFactory.getLogger(DomainRankingOverrides.class);
private final Path overrideFilePath;
@Inject
public DomainRankingOverrides(DbDomainQueries domainQueries) {
this.domainQueries = domainQueries;
overrideFilePath = WmsaHome.getDataPath().resolve("domain-ranking-factors.txt");
Thread.ofPlatform().start(this::updateRunner);
}
// for test access
public DomainRankingOverrides(DbDomainQueries domainQueries, Path overrideFilePath)
{
this.domainQueries = domainQueries;
this.overrideFilePath = overrideFilePath;
}
public double getRankingFactor(int domainId) {
return rankingFactors.get(domainId);
}
private void updateRunner() {
for (;;) {
reloadFile();
try {
TimeUnit.MINUTES.sleep(5);
} catch (InterruptedException ex) {
logger.warn("Thread interrupted", ex);
break;
}
}
}
void reloadFile() {
if (!Files.exists(overrideFilePath)) {
return;
}
try {
List<String> lines = Files.readAllLines(overrideFilePath);
double factor = 1.;
var newRankingFactors = new TIntDoubleHashMap(lines.size(), 0.75f, -1, 1.);
for (var line : lines) {
if (line.isBlank()) continue;
if (line.startsWith("#")) continue;
String[] parts = line.split("\\s+");
if (parts.length != 2) {
logger.warn("Unrecognized format for domain overrides file: {}", line);
continue;
}
try {
switch (parts[0]) {
case "value" -> {
// error handle me
factor = Double.parseDouble(parts[1]);
if (factor < 0) {
logger.error("Negative values are not permitted, found {}", factor);
factor = 1;
}
}
case "domain" -> {
// error handle
OptionalInt domainId = domainQueries.tryGetDomainId(new EdgeDomain(parts[1]));
if (domainId.isPresent()) {
newRankingFactors.put(domainId.getAsInt(), factor);
}
else {
logger.warn("Unrecognized domain id {}", parts[1]);
}
}
default -> {
logger.warn("Unrecognized format {}", line);
}
}
} catch (Exception ex) {
logger.warn("Error in parsing domain overrides file: {} ({})", line, ex.getClass().getSimpleName());
}
}
rankingFactors = newRankingFactors;
} catch (IOException ex) {
logger.error("Failed to read " + overrideFilePath, ex);
}
}
}

View File

@@ -40,13 +40,16 @@ public class IndexResultRankingService {
     private final DocumentDbReader documentDbReader;
     private final StatefulIndex statefulIndex;
+    private final DomainRankingOverrides domainRankingOverrides;
     @Inject
     public IndexResultRankingService(DocumentDbReader documentDbReader,
-                                     StatefulIndex statefulIndex)
+                                     StatefulIndex statefulIndex,
+                                     DomainRankingOverrides domainRankingOverrides)
     {
         this.documentDbReader = documentDbReader;
         this.statefulIndex = statefulIndex;
+        this.domainRankingOverrides = domainRankingOverrides;
     }
     public List<SearchResultItem> rankResults(SearchParameters params,
@@ -57,7 +60,7 @@ public class IndexResultRankingService {
         if (resultIds.isEmpty())
             return List.of();
-        IndexResultScoreCalculator resultRanker = new IndexResultScoreCalculator(statefulIndex, rankingContext, params);
+        IndexResultScoreCalculator resultRanker = new IndexResultScoreCalculator(statefulIndex, domainRankingOverrides, rankingContext, params);
         List<SearchResultItem> results = new ArrayList<>(resultIds.size());

View File

@@ -41,14 +41,17 @@ public class IndexResultScoreCalculator {
     private final CombinedIndexReader index;
     private final QueryParams queryParams;
+    private final DomainRankingOverrides domainRankingOverrides;
     private final ResultRankingContext rankingContext;
     private final CompiledQuery<String> compiledQuery;
     public IndexResultScoreCalculator(StatefulIndex statefulIndex,
+                                      DomainRankingOverrides domainRankingOverrides,
                                       ResultRankingContext rankingContext,
                                       SearchParameters params)
     {
         this.index = statefulIndex.get();
+        this.domainRankingOverrides = domainRankingOverrides;
         this.rankingContext = rankingContext;
         this.queryParams = params.queryParams;
@@ -127,10 +130,10 @@ public class IndexResultScoreCalculator {
                 * wordFlagsQuery.root.visit(new TermFlagsGraphVisitor(params.getBm25K(), wordFlagsQuery.data, unorderedMatches.getWeightedCounts(), rankingContext))
                 / (Math.sqrt(unorderedMatches.searchableKeywordCount + 1));
+        double rankingAdjustment = domainRankingOverrides.getRankingFactor(UrlIdCodec.getDomainId(combinedId));
         double score = normalize(
-                score_firstPosition + score_proximity + score_verbatim
-                + score_bM25
-                + score_bFlags,
+                rankingAdjustment * (score_firstPosition + score_proximity + score_verbatim + score_bM25 + score_bFlags),
                 -Math.min(0, documentBonus) // The magnitude of documentBonus, if it is negative; otherwise 0
         );
@@ -580,3 +583,4 @@ public class IndexResultScoreCalculator {
     }
 }
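The override factor scales the summed keyword score before normalization, so values below 1.0 demote a domain and values above 1.0 boost it. An illustration with invented numbers:

    // sub-scores summing to 12.0:
    //   factor 1.00 -> normalize(12.0, ...)  (default; domain not listed in the overrides file)
    //   factor 0.75 -> normalize( 9.0, ...)  (demoted)
    //   factor 1.10 -> normalize(13.2, ...)  (boosted)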

View File

@@ -0,0 +1,103 @@
package nu.marginalia.index.results;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import nu.marginalia.db.DbDomainQueries;
import nu.marginalia.model.EdgeDomain;
import nu.marginalia.test.TestMigrationLoader;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.parallel.Execution;
import org.junit.jupiter.api.parallel.ExecutionMode;
import org.testcontainers.containers.MariaDBContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.sql.SQLException;
@Testcontainers
@Execution(ExecutionMode.SAME_THREAD)
@Tag("slow")
class DomainRankingOverridesTest {
@Container
static MariaDBContainer<?> mariaDBContainer = new MariaDBContainer<>("mariadb")
.withDatabaseName("WMSA_prod")
.withUsername("wmsa")
.withPassword("wmsa")
.withNetworkAliases("mariadb");
private static DbDomainQueries domainQueries;
@BeforeAll
public static void setup() throws SQLException {
HikariConfig config = new HikariConfig();
config.setJdbcUrl(mariaDBContainer.getJdbcUrl());
config.setUsername("wmsa");
config.setPassword("wmsa");
var dataSource = new HikariDataSource(config);
TestMigrationLoader.flywayMigration(dataSource);
try (var conn = dataSource.getConnection();
var stmt = conn.createStatement()) {
stmt.executeQuery("DELETE FROM EC_DOMAIN"); // Wipe any old state from other test runs
stmt.executeQuery("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('first.example.com', 'example.com', 1)");
stmt.executeQuery("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('second.example.com', 'example.com', 1)");
stmt.executeQuery("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('third.example.com', 'example.com', 1)");
stmt.executeQuery("INSERT INTO EC_DOMAIN (DOMAIN_NAME, DOMAIN_TOP, NODE_AFFINITY) VALUES ('not-added.example.com', 'example.com', 1)");
}
domainQueries = new DbDomainQueries(dataSource);
}
@Test
public void test() throws IOException {
Path overridesFile = Files.createTempFile(getClass().getSimpleName(), ".txt");
try {
Files.writeString(overridesFile, """
# A comment
value 0.75
domain first.example.com
domain second.example.com
value 1.1
domain third.example.com
""",
StandardOpenOption.APPEND);
var overrides = new DomainRankingOverrides(domainQueries, overridesFile);
overrides.reloadFile();
Assertions.assertEquals(0.75, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("first.example.com"))
));
Assertions.assertEquals(0.75, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("second.example.com"))
));
Assertions.assertEquals(1.1, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("third.example.com"))
));
Assertions.assertEquals(1.0, overrides.getRankingFactor(
domainQueries.getDomainId(new EdgeDomain("not-added.example.com"))
));
Assertions.assertEquals(1.0, overrides.getRankingFactor(1<<23));
}
finally {
Files.deleteIfExists(overridesFile);
}
}
}
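The DomainRankingOverrides class itself is not part of this diff, but the test above pins down the file format: '#' starts a comment, "value X" sets the current ranking factor, and "domain name" assigns that factor to a domain; unlisted domains default to 1.0. A minimal, self-contained sketch of a parser for that format (the method name and the plain Map representation are assumptions, not the real implementation) could look like:

import java.util.HashMap;
import java.util.Map;

class OverridesFileSketch {
    static Map<String, Double> parseOverrides(String fileContents) {
        Map<String, Double> rankingFactors = new HashMap<>();
        double currentValue = 1.0; // domains never mentioned keep the default factor 1.0

        for (String line : fileContents.split("\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#"))
                continue; // skip blank lines and comments
            String[] parts = line.split("\\s+", 2);
            switch (parts[0]) {
                case "value" -> currentValue = Double.parseDouble(parts[1]);
                case "domain" -> rankingFactors.put(parts[1], currentValue);
                default -> { /* unknown directive, ignored in this sketch */ }
            }
        }
        return rankingFactors;
    }

    public static void main(String[] args) {
        var factors = parseOverrides("""
                # A comment
                value 0.75
                domain first.example.com
                value 1.1
                domain third.example.com
                """);
        System.out.println(factors); // e.g. {first.example.com=0.75, third.example.com=1.1}
    }
}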

View File

@@ -45,6 +45,11 @@ public class GammaCodedSequenceArrayColumn extends AbstractObjectColumn<List<Gam
); );
} }
@Override
public int alignmentSize() {
return 1;
}
public Reader openUnregistered(URI uri, int page) throws IOException { public Reader openUnregistered(URI uri, int page) throws IOException {
return new Reader( return new Reader(
dataColumn.openUnregistered(uri, page), dataColumn.openUnregistered(uri, page),
@@ -109,6 +114,11 @@ public class GammaCodedSequenceArrayColumn extends AbstractObjectColumn<List<Gam
dataReader.skip(toSkip); dataReader.skip(toSkip);
} }
@Override
public boolean isDirect() {
return dataReader.isDirect();
}
@Override @Override
public boolean hasRemaining() throws IOException { public boolean hasRemaining() throws IOException {
return groupsReader.hasRemaining(); return groupsReader.hasRemaining();

View File

@@ -44,6 +44,11 @@ public class GammaCodedSequenceColumn extends AbstractObjectColumn<GammaCodedSeq
); );
} }
@Override
public int alignmentSize() {
return 1;
}
public Reader openUnregistered(URI uri, int page) throws IOException { public Reader openUnregistered(URI uri, int page) throws IOException {
return new Reader( return new Reader(
Storage.reader(uri, this, page, false), Storage.reader(uri, this, page, false),
@@ -96,6 +101,11 @@ public class GammaCodedSequenceColumn extends AbstractObjectColumn<GammaCodedSeq
this.indexReader = indexReader; this.indexReader = indexReader;
} }
@Override
public boolean isDirect() {
return storage.isDirect();
}
@Override @Override
public AbstractColumn<?, ?> columnDesc() { public AbstractColumn<?, ?> columnDesc() {
return GammaCodedSequenceColumn.this; return GammaCodedSequenceColumn.this;

View File

@@ -45,6 +45,11 @@ public class VarintCodedSequenceArrayColumn extends AbstractObjectColumn<List<Va
); );
} }
@Override
public int alignmentSize() {
return 0;
}
public Reader openUnregistered(URI uri, int page) throws IOException { public Reader openUnregistered(URI uri, int page) throws IOException {
return new Reader( return new Reader(
dataColumn.openUnregistered(uri, page), dataColumn.openUnregistered(uri, page),
@@ -109,6 +114,11 @@ public class VarintCodedSequenceArrayColumn extends AbstractObjectColumn<List<Va
dataReader.skip(toSkip); dataReader.skip(toSkip);
} }
@Override
public boolean isDirect() {
return dataReader.isDirect();
}
@Override @Override
public boolean hasRemaining() throws IOException { public boolean hasRemaining() throws IOException {
return groupsReader.hasRemaining(); return groupsReader.hasRemaining();

View File

@@ -44,6 +44,11 @@ public class VarintCodedSequenceColumn extends AbstractObjectColumn<VarintCodedS
); );
} }
@Override
public int alignmentSize() {
return 1;
}
public Reader openUnregistered(URI uri, int page) throws IOException { public Reader openUnregistered(URI uri, int page) throws IOException {
return new Reader( return new Reader(
Storage.reader(uri, this, page, false), Storage.reader(uri, this, page, false),
@@ -101,6 +106,11 @@ public class VarintCodedSequenceColumn extends AbstractObjectColumn<VarintCodedS
return VarintCodedSequenceColumn.this; return VarintCodedSequenceColumn.this;
} }
@Override
public boolean isDirect() {
return storage.isDirect();
}
@Override @Override
public void skip(long positions) throws IOException { public void skip(long positions) throws IOException {
for (int i = 0; i < positions; i++) { for (int i = 0; i < positions; i++) {

View File

@@ -155,8 +155,15 @@ public class SentenceExtractor {
public List<DocumentSentence> extractSentencesFromString(String text, EnumSet<HtmlTag> htmlTags) { public List<DocumentSentence> extractSentencesFromString(String text, EnumSet<HtmlTag> htmlTags) {
String[] sentences; String[] sentences;
// Normalize spaces // Safety net against malformed data DOS attacks,
// found 5+ MB <p>-tags in the wild that just break
// the sentence extractor, causing it to stall forever.
if (text.length() > 50_000) {
// 50k chars can hold a small novel, let alone a single HTML tag
text = text.substring(0, 50_000);
}
// Normalize spaces
text = normalizeSpaces(text); text = normalizeSpaces(text);
// Split into sentences // Split into sentences

View File

@@ -12,7 +12,6 @@ import nu.marginalia.converting.sideload.SideloadSourceFactory;
import nu.marginalia.converting.writer.ConverterBatchWritableIf; import nu.marginalia.converting.writer.ConverterBatchWritableIf;
import nu.marginalia.converting.writer.ConverterBatchWriter; import nu.marginalia.converting.writer.ConverterBatchWriter;
import nu.marginalia.converting.writer.ConverterWriter; import nu.marginalia.converting.writer.ConverterWriter;
import nu.marginalia.io.CrawledDomainReader;
import nu.marginalia.io.SerializableCrawlDataStream; import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.mq.MessageQueueFactory; import nu.marginalia.mq.MessageQueueFactory;
import nu.marginalia.mqapi.converting.ConvertRequest; import nu.marginalia.mqapi.converting.ConvertRequest;
@@ -51,6 +50,7 @@ public class ConverterMain extends ProcessMainClass {
private final ProcessHeartbeat heartbeat; private final ProcessHeartbeat heartbeat;
private final FileStorageService fileStorageService; private final FileStorageService fileStorageService;
private final SideloadSourceFactory sideloadSourceFactory; private final SideloadSourceFactory sideloadSourceFactory;
private static final int SIDELOAD_THRESHOLD = Integer.getInteger("converter.sideloadThreshold", 10_000);
public static void main(String... args) throws Exception { public static void main(String... args) throws Exception {
@@ -201,12 +201,20 @@ public class ConverterMain extends ProcessMainClass {
processedDomains.set(batchingWorkLog.size()); processedDomains.set(batchingWorkLog.size());
heartbeat.setProgress(processedDomains.get() / (double) totalDomains); heartbeat.setProgress(processedDomains.get() / (double) totalDomains);
for (var domain : WorkLog.iterableMap(crawlDir.getLogFile(), logger.info("Processing small items");
int numBigTasks = 0;
// First process the small items
for (var dataPath : WorkLog.iterableMap(crawlDir.getLogFile(),
new CrawlDataLocator(crawlDir.getDir(), batchingWorkLog))) new CrawlDataLocator(crawlDir.getDir(), batchingWorkLog)))
{ {
if (SerializableCrawlDataStream.getSizeHint(dataPath) >= SIDELOAD_THRESHOLD) {
numBigTasks ++;
continue;
}
pool.submit(() -> { pool.submit(() -> {
try { try (var dataStream = SerializableCrawlDataStream.openDataStream(dataPath)) {
ConverterBatchWritableIf writable = processor.createWritable(domain); ConverterBatchWritableIf writable = processor.fullProcessing(dataStream) ;
converterWriter.accept(writable); converterWriter.accept(writable);
} }
catch (Exception ex) { catch (Exception ex) {
@@ -225,10 +233,46 @@ public class ConverterMain extends ProcessMainClass {
do { do {
System.out.println("Waiting for pool to terminate... " + pool.getActiveCount() + " remaining"); System.out.println("Waiting for pool to terminate... " + pool.getActiveCount() + " remaining");
} while (!pool.awaitTermination(60, TimeUnit.SECONDS)); } while (!pool.awaitTermination(60, TimeUnit.SECONDS));
logger.info("Processing large items");
try (var hb = heartbeat.createAdHocTaskHeartbeat("Large Domains")) {
int bigTaskIdx = 0;
// Next the big items domain-by-domain
for (var dataPath : WorkLog.iterableMap(crawlDir.getLogFile(),
new CrawlDataLocator(crawlDir.getDir(), batchingWorkLog)))
{
int sizeHint = SerializableCrawlDataStream.getSizeHint(dataPath);
if (sizeHint < SIDELOAD_THRESHOLD) {
continue;
}
hb.progress(dataPath.toFile().getName(), bigTaskIdx++, numBigTasks);
try {
// SerializableCrawlDataStream is AutoCloseable, but we can't use try-with-resources here because the stream would be
// closed before it's consumed by the converterWriter. Instead, the converterWriter guarantees it
// will close it after it's consumed.
var stream = SerializableCrawlDataStream.openDataStream(dataPath);
ConverterBatchWritableIf writable = processor.simpleProcessing(stream, sizeHint);
converterWriter.accept(writable);
}
catch (Exception ex) {
logger.info("Error in processing", ex);
}
finally {
heartbeat.setProgress(processedDomains.incrementAndGet() / (double) totalDomains);
}
}
}
logger.info("Processing complete");
} }
} }
private static class CrawlDataLocator implements Function<WorkLogEntry, Optional<SerializableCrawlDataStream>> { private static class CrawlDataLocator implements Function<WorkLogEntry, Optional<Path>> {
private final Path crawlRootDir; private final Path crawlRootDir;
private final BatchingWorkLog batchingWorkLog; private final BatchingWorkLog batchingWorkLog;
@@ -239,7 +283,7 @@ public class ConverterMain extends ProcessMainClass {
} }
@Override @Override
public Optional<SerializableCrawlDataStream> apply(WorkLogEntry entry) { public Optional<Path> apply(WorkLogEntry entry) {
if (batchingWorkLog.isItemProcessed(entry.id())) { if (batchingWorkLog.isItemProcessed(entry.id())) {
return Optional.empty(); return Optional.empty();
} }
@@ -252,7 +296,7 @@ public class ConverterMain extends ProcessMainClass {
} }
try { try {
return Optional.of(CrawledDomainReader.createDataStream(path)); return Optional.of(path);
} }
catch (Exception ex) { catch (Exception ex) {
return Optional.empty(); return Optional.empty();

View File

@@ -19,6 +19,7 @@ import nu.marginalia.model.idx.WordFlags;
import org.slf4j.Logger; import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.URISyntaxException; import java.net.URISyntaxException;
import java.util.ArrayList; import java.util.ArrayList;
import java.util.List; import java.util.List;
@@ -91,7 +92,7 @@ public class DocumentProcessor {
DocumentClass documentClass, DocumentClass documentClass,
DocumentDecorator documentDecorator, DocumentDecorator documentDecorator,
DomainLinks externalDomainLinks, DomainLinks externalDomainLinks,
ProcessedDocument ret) throws URISyntaxException, DisqualifiedException ProcessedDocument ret) throws URISyntaxException, IOException, DisqualifiedException
{ {
var crawlerStatus = CrawlerDocumentStatus.valueOf(crawledDocument.crawlerStatus); var crawlerStatus = CrawlerDocumentStatus.valueOf(crawledDocument.crawlerStatus);
@@ -109,7 +110,7 @@ public class DocumentProcessor {
ret.state = crawlerStatusToUrlState(crawledDocument.crawlerStatus, crawledDocument.httpStatus); ret.state = crawlerStatusToUrlState(crawledDocument.crawlerStatus, crawledDocument.httpStatus);
final var plugin = findPlugin(crawledDocument); AbstractDocumentProcessorPlugin plugin = findPlugin(crawledDocument);
EdgeUrl url = new EdgeUrl(crawledDocument.url); EdgeUrl url = new EdgeUrl(crawledDocument.url);
LinkTexts linkTexts = anchorTextKeywords.getAnchorTextKeywords(externalDomainLinks, url); LinkTexts linkTexts = anchorTextKeywords.getAnchorTextKeywords(externalDomainLinks, url);

View File

@@ -32,7 +32,6 @@ import java.util.*;
import java.util.regex.Pattern; import java.util.regex.Pattern;
public class DomainProcessor { public class DomainProcessor {
private static final int SIDELOAD_THRESHOLD = Integer.getInteger("converter.sideloadThreshold", 10_000);
private final DocumentProcessor documentProcessor; private final DocumentProcessor documentProcessor;
private final SiteWords siteWords; private final SiteWords siteWords;
private final AnchorTagsSource anchorTagsSource; private final AnchorTagsSource anchorTagsSource;
@@ -54,21 +53,9 @@ public class DomainProcessor {
geoIpDictionary.waitReady(); geoIpDictionary.waitReady();
} }
public ConverterBatchWritableIf createWritable(SerializableCrawlDataStream domain) { public SimpleProcessing simpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) {
final int sizeHint = domain.sizeHint();
if (sizeHint > SIDELOAD_THRESHOLD) {
// If the file is too big, we run a processing mode that doesn't
// require loading the entire dataset into RAM
return sideloadProcessing(domain, sizeHint);
}
return fullProcessing(domain);
}
public SideloadProcessing sideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) {
try { try {
return new SideloadProcessing(dataStream, sizeHint, extraKeywords); return new SimpleProcessing(dataStream, sizeHint, extraKeywords);
} }
catch (Exception ex) { catch (Exception ex) {
logger.warn("Failed to process domain sideload", ex); logger.warn("Failed to process domain sideload", ex);
@@ -76,9 +63,9 @@ public class DomainProcessor {
} }
} }
public SideloadProcessing sideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint) { public SimpleProcessing simpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint) {
try { try {
return new SideloadProcessing(dataStream, sizeHint); return new SimpleProcessing(dataStream, sizeHint);
} }
catch (Exception ex) { catch (Exception ex) {
logger.warn("Failed to process domain sideload", ex); logger.warn("Failed to process domain sideload", ex);
@@ -86,22 +73,84 @@ public class DomainProcessor {
} }
} }
public class SideloadProcessing implements ConverterBatchWritableIf, SideloadSource { @Nullable
public ProcessedDomain fullProcessing(SerializableCrawlDataStream dataStream) {
try {
if (!dataStream.hasNext()) {
return null;
}
List<ProcessedDocument> docs = new ArrayList<>();
Set<String> processedUrls = new HashSet<>();
if (!(dataStream.next() instanceof CrawledDomain crawledDomain)) {
throw new IllegalStateException("First record must be a domain, was " + dataStream.next().getClass().getSimpleName());
}
DomainLinks externalDomainLinks = anchorTagsSource.getAnchorTags(crawledDomain.getDomain());
DocumentDecorator documentDecorator = new DocumentDecorator();
// Process Domain Record
ProcessedDomain ret = new ProcessedDomain();
processDomain(crawledDomain, ret, documentDecorator);
ret.documents = docs;
// Process Documents
try (var deduplicator = new LshDocumentDeduplicator()) {
while (dataStream.hasNext()) {
if (!(dataStream.next() instanceof CrawledDocument doc))
continue;
if (doc.url == null)
continue;
if (doc.documentBodyBytes.length == 0)
continue;
if (!processedUrls.add(doc.url))
continue;
try {
var processedDoc = documentProcessor.process(doc, ret.domain, externalDomainLinks, documentDecorator);
deduplicator.markIfDuplicate(processedDoc);
docs.add(processedDoc);
} catch (Exception ex) {
logger.warn("Failed to process " + doc.url, ex);
}
}
}
// Add late keywords and features from domain-level information
calculateStatistics(ret, externalDomainLinks);
return ret;
}
catch (Exception ex) {
logger.warn("Failed to process domain", ex);
return null;
}
}
/** The simple processing track processes documents individually, and does not perform any domain-level analysis.
* This is needed to process extremely large domains, which would otherwise eat up too much RAM.
*/
public class SimpleProcessing implements ConverterBatchWritableIf, SideloadSource {
private final SerializableCrawlDataStream dataStream; private final SerializableCrawlDataStream dataStream;
private final ProcessedDomain domain; private final ProcessedDomain domain;
private final DocumentDecorator documentDecorator; private final DocumentDecorator documentDecorator;
private final Set<String> processedUrls = new HashSet<>(); private final Set<String> processedUrls = new HashSet<>();
private final DomainLinks externalDomainLinks; private final DomainLinks externalDomainLinks;
private final LshDocumentDeduplicator deduplicator = new LshDocumentDeduplicator(); private final LshDocumentDeduplicator deduplicator = new LshDocumentDeduplicator();
private static final ProcessingIterator.Factory iteratorFactory = ProcessingIterator.factory(8, private static final ProcessingIterator.Factory iteratorFactory = ProcessingIterator.factory(8,
Integer.getInteger("java.util.concurrent.ForkJoinPool.common.parallelism", Runtime.getRuntime().availableProcessors()) Integer.getInteger("java.util.concurrent.ForkJoinPool.common.parallelism", Runtime.getRuntime().availableProcessors())
); );
SideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint) throws IOException { SimpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint) throws IOException {
this(dataStream, sizeHint, List.of()); this(dataStream, sizeHint, List.of());
} }
SideloadProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) throws IOException { SimpleProcessing(SerializableCrawlDataStream dataStream, int sizeHint, Collection<String> extraKeywords) throws IOException {
this.dataStream = dataStream; this.dataStream = dataStream;
if (!dataStream.hasNext() || !(dataStream.next() instanceof CrawledDomain crawledDomain)) if (!dataStream.hasNext() || !(dataStream.next() instanceof CrawledDomain crawledDomain))
@@ -128,6 +177,7 @@ public class DomainProcessor {
@Override @Override
public Iterator<ProcessedDocument> getDocumentsStream() { public Iterator<ProcessedDocument> getDocumentsStream() {
return iteratorFactory.create((taskConsumer) -> { return iteratorFactory.create((taskConsumer) -> {
while (dataStream.hasNext()) while (dataStream.hasNext())
{ {
if (!(dataStream.next() instanceof CrawledDocument doc)) if (!(dataStream.next() instanceof CrawledDocument doc))
@@ -172,65 +222,6 @@ public class DomainProcessor {
} }
} }
@Nullable
public ProcessedDomain fullProcessing(SerializableCrawlDataStream dataStream) {
try {
if (!dataStream.hasNext()) {
return null;
}
List<ProcessedDocument> docs = new ArrayList<>();
Set<String> processedUrls = new HashSet<>();
if (!(dataStream.next() instanceof CrawledDomain crawledDomain)) {
throw new IllegalStateException("First record must be a domain, was " + dataStream.next().getClass().getSimpleName());
}
DomainLinks externalDomainLinks = anchorTagsSource.getAnchorTags(crawledDomain.getDomain());
DocumentDecorator documentDecorator = new DocumentDecorator();
// Process Domain Record
ProcessedDomain ret = new ProcessedDomain();
processDomain(crawledDomain, ret, documentDecorator);
ret.documents = docs;
// Process Documents
try (var deduplicator = new LshDocumentDeduplicator()) {
while (dataStream.hasNext()) {
if (!(dataStream.next() instanceof CrawledDocument doc))
continue;
if (doc.url == null)
continue;
if (doc.documentBody.isBlank())
continue;
if (!processedUrls.add(doc.url))
continue;
try {
var processedDoc = documentProcessor.process(doc, ret.domain, externalDomainLinks, documentDecorator);
deduplicator.markIfDuplicate(processedDoc);
docs.add(processedDoc);
} catch (Exception ex) {
logger.warn("Failed to process " + doc.url, ex);
}
}
}
// Add late keywords and features from domain-level information
calculateStatistics(ret, externalDomainLinks);
return ret;
}
catch (Exception ex) {
logger.warn("Failed to process domain", ex);
return null;
}
}
private void processDomain(CrawledDomain crawledDomain, private void processDomain(CrawledDomain crawledDomain,
ProcessedDomain domain, ProcessedDomain domain,
DocumentDecorator decorator) DocumentDecorator decorator)

View File

@@ -24,7 +24,7 @@ public class DocumentValuator {
double scriptPenalty = getScriptPenalty(parsedDocument); double scriptPenalty = getScriptPenalty(parsedDocument);
double chatGptPenalty = getChatGptContentFarmPenalty(parsedDocument); double chatGptPenalty = getChatGptContentFarmPenalty(parsedDocument);
int rawLength = crawledDocument.documentBody.length(); int rawLength = crawledDocument.documentBodyBytes.length;
if (textLength == 0) { if (textLength == 0) {
throw new DisqualifiedException(DisqualifiedException.DisqualificationReason.LENGTH); throw new DisqualifiedException(DisqualifiedException.DisqualificationReason.LENGTH);

View File

@@ -218,7 +218,10 @@ public class FeatureExtractor {
} }
} }
if (features.contains(HtmlFeature.JS) && adblockSimulator.hasAds(doc.clone())) { if (features.contains(HtmlFeature.JS)
// removed while disabled, to get rid of the expensive clone() call:
// adblockSimulator.hasAds(doc.clone())
) {
features.add(HtmlFeature.ADVERTISEMENT); features.add(HtmlFeature.ADVERTISEMENT);
} }

View File

@@ -14,6 +14,7 @@ import nu.marginalia.model.crawldata.CrawledDocument;
import nu.marginalia.model.html.HtmlStandard; import nu.marginalia.model.html.HtmlStandard;
import javax.annotation.Nullable; import javax.annotation.Nullable;
import java.io.IOException;
import java.net.URISyntaxException; import java.net.URISyntaxException;
import java.util.HashSet; import java.util.HashSet;
import java.util.List; import java.util.List;
@@ -25,7 +26,7 @@ public abstract class AbstractDocumentProcessorPlugin {
this.languageFilter = languageFilter; this.languageFilter = languageFilter;
} }
public abstract DetailsWithWords createDetails(CrawledDocument crawledDocument, LinkTexts linkTexts, DocumentClass documentClass) throws DisqualifiedException, URISyntaxException; public abstract DetailsWithWords createDetails(CrawledDocument crawledDocument, LinkTexts linkTexts, DocumentClass documentClass) throws DisqualifiedException, URISyntaxException, IOException;
public abstract boolean isApplicable(CrawledDocument doc); public abstract boolean isApplicable(CrawledDocument doc);
protected void checkDocumentLanguage(DocumentLanguageData dld) throws DisqualifiedException { protected void checkDocumentLanguage(DocumentLanguageData dld) throws DisqualifiedException {
@@ -86,6 +87,7 @@ public abstract class AbstractDocumentProcessorPlugin {
return this; return this;
} }
public MetaTagsBuilder addPubDate(PubDate pubDate) { public MetaTagsBuilder addPubDate(PubDate pubDate) {
if (pubDate.year() > 1900) { if (pubDate.year() > 1900) {

View File

@@ -6,6 +6,7 @@ import nu.marginalia.converting.model.DisqualifiedException;
import nu.marginalia.converting.model.DocumentHeaders; import nu.marginalia.converting.model.DocumentHeaders;
import nu.marginalia.converting.model.GeneratorType; import nu.marginalia.converting.model.GeneratorType;
import nu.marginalia.converting.model.ProcessedDocumentDetails; import nu.marginalia.converting.model.ProcessedDocumentDetails;
import nu.marginalia.converting.processor.AcceptableAds;
import nu.marginalia.converting.processor.DocumentClass; import nu.marginalia.converting.processor.DocumentClass;
import nu.marginalia.converting.processor.MetaRobotsTag; import nu.marginalia.converting.processor.MetaRobotsTag;
import nu.marginalia.converting.processor.logic.*; import nu.marginalia.converting.processor.logic.*;
@@ -32,11 +33,11 @@ import nu.marginalia.model.crawldata.CrawledDocument;
import nu.marginalia.model.html.HtmlStandard; import nu.marginalia.model.html.HtmlStandard;
import nu.marginalia.model.idx.DocumentFlags; import nu.marginalia.model.idx.DocumentFlags;
import nu.marginalia.model.idx.DocumentMetadata; import nu.marginalia.model.idx.DocumentMetadata;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document; import org.jsoup.nodes.Document;
import org.slf4j.Logger; import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.URISyntaxException; import java.net.URISyntaxException;
import java.util.EnumSet; import java.util.EnumSet;
import java.util.HashSet; import java.util.HashSet;
@@ -51,7 +52,6 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
private final double minDocumentQuality; private final double minDocumentQuality;
private final FeatureExtractor featureExtractor; private final FeatureExtractor featureExtractor;
private final TitleExtractor titleExtractor;
private final DocumentKeywordExtractor keywordExtractor; private final DocumentKeywordExtractor keywordExtractor;
private final PubDateSniffer pubDateSniffer; private final PubDateSniffer pubDateSniffer;
@@ -74,7 +74,6 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
@Named("min-document-quality") Double minDocumentQuality, @Named("min-document-quality") Double minDocumentQuality,
LanguageFilter languageFilter, LanguageFilter languageFilter,
FeatureExtractor featureExtractor, FeatureExtractor featureExtractor,
TitleExtractor titleExtractor,
DocumentKeywordExtractor keywordExtractor, DocumentKeywordExtractor keywordExtractor,
PubDateSniffer pubDateSniffer, PubDateSniffer pubDateSniffer,
DocumentLengthLogic documentLengthLogic, DocumentLengthLogic documentLengthLogic,
@@ -89,7 +88,6 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
this.minDocumentQuality = minDocumentQuality; this.minDocumentQuality = minDocumentQuality;
this.featureExtractor = featureExtractor; this.featureExtractor = featureExtractor;
this.titleExtractor = titleExtractor;
this.keywordExtractor = keywordExtractor; this.keywordExtractor = keywordExtractor;
this.pubDateSniffer = pubDateSniffer; this.pubDateSniffer = pubDateSniffer;
this.metaRobotsTag = metaRobotsTag; this.metaRobotsTag = metaRobotsTag;
@@ -108,19 +106,17 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
public DetailsWithWords createDetails(CrawledDocument crawledDocument, public DetailsWithWords createDetails(CrawledDocument crawledDocument,
LinkTexts linkTexts, LinkTexts linkTexts,
DocumentClass documentClass) DocumentClass documentClass)
throws DisqualifiedException, URISyntaxException { throws DisqualifiedException, URISyntaxException, IOException {
String documentBody = crawledDocument.documentBody; if (languageFilter.isBlockedUnicodeRange(crawledDocument.documentBody(512))) {
if (languageFilter.isBlockedUnicodeRange(documentBody)) {
throw new DisqualifiedException(DisqualificationReason.LANGUAGE); throw new DisqualifiedException(DisqualificationReason.LANGUAGE);
} }
if (documentBody.length() > MAX_DOCUMENT_LENGTH_BYTES) { // 128kb Document doc = crawledDocument.parseBody();
documentBody = documentBody.substring(0, MAX_DOCUMENT_LENGTH_BYTES);
}
Document doc = Jsoup.parse(documentBody); if (AcceptableAds.hasAcceptableAdsTag(doc)) {
throw new DisqualifiedException(DisqualifiedException.DisqualificationReason.ACCEPTABLE_ADS);
}
if (!metaRobotsTag.allowIndexingByMetaTag(doc)) { if (!metaRobotsTag.allowIndexingByMetaTag(doc)) {
throw new DisqualifiedException(DisqualificationReason.FORBIDDEN); throw new DisqualifiedException(DisqualificationReason.FORBIDDEN);
@@ -138,32 +134,33 @@ public class HtmlDocumentProcessorPlugin extends AbstractDocumentProcessorPlugin
} }
var prunedDoc = specialization.prune(doc); var prunedDoc = specialization.prune(doc);
DocumentLanguageData dld = sentenceExtractorProvider.get().extractSentences(prunedDoc);
checkDocumentLanguage(dld);
var ret = new ProcessedDocumentDetails();
final int length = getLength(doc); final int length = getLength(doc);
final HtmlStandard standard = getHtmlStandard(doc); final HtmlStandard standard = getHtmlStandard(doc);
final double quality = documentValuator.getQuality(crawledDocument, standard, doc, length); final double quality = documentValuator.getQuality(crawledDocument, standard, doc, length);
if (isDisqualified(documentClass, url, quality, doc.title())) {
throw new DisqualifiedException(DisqualificationReason.QUALITY);
}
DocumentLanguageData dld = sentenceExtractorProvider.get().extractSentences(prunedDoc);
checkDocumentLanguage(dld);
documentLengthLogic.validateLength(dld, specialization.lengthModifier() * documentClass.lengthLimitModifier());
var ret = new ProcessedDocumentDetails();
ret.length = length; ret.length = length;
ret.standard = standard; ret.standard = standard;
ret.title = specialization.getTitle(doc, dld, crawledDocument.url); ret.title = specialization.getTitle(doc, dld, crawledDocument.url);
documentLengthLogic.validateLength(dld, specialization.lengthModifier() * documentClass.lengthLimitModifier());
final Set<HtmlFeature> features = featureExtractor.getFeatures(url, doc, documentHeaders, dld); final Set<HtmlFeature> features = featureExtractor.getFeatures(url, doc, documentHeaders, dld);
ret.features = features; ret.features = features;
ret.quality = documentValuator.adjustQuality(quality, features); ret.quality = documentValuator.adjustQuality(quality, features);
ret.hashCode = dld.localitySensitiveHashCode(); ret.hashCode = dld.localitySensitiveHashCode();
if (isDisqualified(documentClass, url, quality, ret.title)) {
throw new DisqualifiedException(DisqualificationReason.QUALITY);
}
PubDate pubDate = pubDateSniffer.getPubDate(documentHeaders, url, doc, standard, true); PubDate pubDate = pubDateSniffer.getPubDate(documentHeaders, url, doc, standard, true);
EnumSet<DocumentFlags> documentFlags = documentFlags(features, generatorParts.type()); EnumSet<DocumentFlags> documentFlags = documentFlags(features, generatorParts.type());

View File

@@ -71,7 +71,7 @@ public class PlainTextDocumentProcessorPlugin extends AbstractDocumentProcessorP
DocumentClass documentClass) DocumentClass documentClass)
throws DisqualifiedException, URISyntaxException { throws DisqualifiedException, URISyntaxException {
String documentBody = crawledDocument.documentBody; String documentBody = crawledDocument.documentBody();
if (languageFilter.isBlockedUnicodeRange(documentBody)) { if (languageFilter.isBlockedUnicodeRange(documentBody)) {
throw new DisqualifiedException(DisqualifiedException.DisqualificationReason.LANGUAGE); throw new DisqualifiedException(DisqualifiedException.DisqualificationReason.LANGUAGE);

View File

@@ -19,6 +19,7 @@ import nu.marginalia.model.idx.DocumentMetadata;
import nu.marginalia.model.idx.WordFlags; import nu.marginalia.model.idx.WordFlags;
import java.net.URISyntaxException; import java.net.URISyntaxException;
import java.nio.charset.StandardCharsets;
import java.time.LocalDateTime; import java.time.LocalDateTime;
import java.util.EnumSet; import java.util.EnumSet;
import java.util.List; import java.util.List;
@@ -50,7 +51,7 @@ public class SideloaderProcessing {
"OK", "OK",
"NP", "NP",
"", "",
body, body.getBytes(StandardCharsets.UTF_8),
false, false,
null, null,
null null

View File

@@ -106,11 +106,7 @@ public class WarcSideloader implements SideloadSource, AutoCloseable {
return false; return false;
var url = new EdgeUrl(warcResponse.target()); var url = new EdgeUrl(warcResponse.target());
if (!Objects.equals(url.getDomain(), domain)) { return Objects.equals(url.getDomain(), domain);
return false;
}
return true;
} catch (Exception e) { } catch (Exception e) {
logger.warn("Failed to process response", e); logger.warn("Failed to process response", e);
} }

View File

@@ -39,6 +39,9 @@ public class ConverterWriter implements AutoCloseable {
workerThread.start(); workerThread.start();
} }
/** Queue and eventually write the domain into the converter journal.
* The domain object will be closed after it's processed.
* */
public void accept(@Nullable ConverterBatchWritableIf domain) { public void accept(@Nullable ConverterBatchWritableIf domain) {
if (null == domain) if (null == domain)
return; return;
@@ -72,15 +75,15 @@ public class ConverterWriter implements AutoCloseable {
if (workLog.isItemCommitted(id) || workLog.isItemInCurrentBatch(id)) { if (workLog.isItemCommitted(id) || workLog.isItemInCurrentBatch(id)) {
logger.warn("Skipping already logged item {}", id); logger.warn("Skipping already logged item {}", id);
}
else {
currentWriter.write(data);
workLog.logItem(id);
data.close(); data.close();
continue;
} }
currentWriter.write(data);
workLog.logItem(id);
switcher.tick(); switcher.tick();
data.close();
} }
} }
catch (Exception ex) { catch (Exception ex) {

View File

@@ -98,7 +98,7 @@ public class ConvertingIntegrationTest {
@Test @Test
public void testMemexMarginaliaNuSideloadProcessing() throws IOException { public void testMemexMarginaliaNuSideloadProcessing() throws IOException {
var ret = domainProcessor.sideloadProcessing(asSerializableCrawlData(readMarginaliaWorkingSet()), 100); var ret = domainProcessor.simpleProcessing(asSerializableCrawlData(readMarginaliaWorkingSet()), 100);
assertNotNull(ret); assertNotNull(ret);
assertEquals("memex.marginalia.nu", ret.id()); assertEquals("memex.marginalia.nu", ret.id());
@@ -146,7 +146,7 @@ public class ConvertingIntegrationTest {
"OK", "OK",
"", "",
"", "",
readClassPathFile(p.toString()), readClassPathFile(p.toString()).getBytes(),
false, false,
null, null,
null null

View File

@@ -8,6 +8,7 @@ import nu.marginalia.converting.model.ProcessedDomain;
import nu.marginalia.converting.processor.DomainProcessor; import nu.marginalia.converting.processor.DomainProcessor;
import nu.marginalia.crawl.CrawlerMain; import nu.marginalia.crawl.CrawlerMain;
import nu.marginalia.crawl.DomainStateDb; import nu.marginalia.crawl.DomainStateDb;
import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.HttpFetcher; import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl; import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder; import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
@@ -200,23 +201,23 @@ public class CrawlingThenConvertingIntegrationTest {
@Test @Test
public void crawlRobotsTxt() throws Exception { public void crawlRobotsTxt() throws Exception {
var specs = new CrawlerMain.CrawlSpecRecord("search.marginalia.nu", 5, var specs = new CrawlerMain.CrawlSpecRecord("marginalia-search.com", 5,
List.of("https://search.marginalia.nu/search?q=hello+world") List.of("https://marginalia-search.com/search?q=hello+world")
); );
CrawledDomain domain = crawl(specs); CrawledDomain domain = crawl(specs);
assertFalse(domain.doc.isEmpty()); assertFalse(domain.doc.isEmpty());
assertEquals("OK", domain.crawlerStatus); assertEquals("OK", domain.crawlerStatus);
assertEquals("search.marginalia.nu", domain.domain); assertEquals("marginalia-search.com", domain.domain);
Set<String> allUrls = domain.doc.stream().map(doc -> doc.url).collect(Collectors.toSet()); Set<String> allUrls = domain.doc.stream().map(doc -> doc.url).collect(Collectors.toSet());
assertTrue(allUrls.contains("https://search.marginalia.nu/search"), "We expect a record for entities that are forbidden"); assertTrue(allUrls.contains("https://marginalia-search.com/search"), "We expect a record for entities that are forbidden");
var output = process(); var output = process();
assertNotNull(output); assertNotNull(output);
assertFalse(output.documents.isEmpty()); assertFalse(output.documents.isEmpty());
assertEquals(new EdgeDomain("search.marginalia.nu"), output.domain); assertEquals(new EdgeDomain("marginalia-search.com"), output.domain);
assertEquals(DomainIndexingState.ACTIVE, output.state); assertEquals(DomainIndexingState.ACTIVE, output.state);
for (var doc : output.documents) { for (var doc : output.documents) {
@@ -246,7 +247,7 @@ public class CrawlingThenConvertingIntegrationTest {
private CrawledDomain crawl(CrawlerMain.CrawlSpecRecord specs, Predicate<EdgeDomain> domainBlacklist) throws Exception { private CrawledDomain crawl(CrawlerMain.CrawlSpecRecord specs, Predicate<EdgeDomain> domainBlacklist) throws Exception {
List<SerializableCrawlData> data = new ArrayList<>(); List<SerializableCrawlData> data = new ArrayList<>();
try (var recorder = new WarcRecorder(fileName); try (var recorder = new WarcRecorder(fileName, new Cookies());
var db = new DomainStateDb(dbTempFile)) var db = new DomainStateDb(dbTempFile))
{ {
new CrawlerRetreiver(httpFetcher, new DomainProber(domainBlacklist), specs, db, recorder).crawlDomain(); new CrawlerRetreiver(httpFetcher, new DomainProber(domainBlacklist), specs, db, recorder).crawlDomain();

View File

@@ -55,7 +55,6 @@ dependencies {
implementation libs.zstd implementation libs.zstd
implementation libs.jwarc implementation libs.jwarc
implementation libs.crawlercommons implementation libs.crawlercommons
implementation libs.okhttp3
implementation libs.jsoup implementation libs.jsoup
implementation libs.opencsv implementation libs.opencsv
implementation libs.fastutil implementation libs.fastutil

View File

@@ -2,11 +2,16 @@ package nu.marginalia.contenttype;
import org.apache.commons.lang3.StringUtils; import org.apache.commons.lang3.StringUtils;
import java.nio.charset.Charset;
import java.nio.charset.IllegalCharsetNameException;
import java.nio.charset.StandardCharsets;
/** Content type and charset of a document /** Content type and charset of a document
* @param contentType The content type, e.g. "text/html" * @param contentType The content type, e.g. "text/html"
* @param charset The charset, e.g. "UTF-8" * @param charset The charset, e.g. "UTF-8"
*/ */
public record ContentType(String contentType, String charset) { public record ContentType(String contentType, String charset) {
public static ContentType parse(String contentTypeHeader) { public static ContentType parse(String contentTypeHeader) {
if (contentTypeHeader == null || contentTypeHeader.isBlank()) if (contentTypeHeader == null || contentTypeHeader.isBlank())
return new ContentType(null, null); return new ContentType(null, null);
@@ -15,9 +20,31 @@ public record ContentType(String contentType, String charset) {
String contentType = parts[0].trim(); String contentType = parts[0].trim();
String charset = parts.length > 1 ? parts[1].trim() : "UTF-8"; String charset = parts.length > 1 ? parts[1].trim() : "UTF-8";
if (charset.toLowerCase().startsWith("charset=")) {
charset = charset.substring("charset=".length());
}
return new ContentType(contentType, charset); return new ContentType(contentType, charset);
} }
/** Best-effort method for turning the provided charset string into a Java Charset,
* with some guesswork heuristics for when it doesn't work.
*/
public Charset asCharset() {
try {
if (Charset.isSupported(charset)) {
return Charset.forName(charset);
} else if (charset.equalsIgnoreCase("macintosh-latin")) {
return StandardCharsets.ISO_8859_1;
} else {
return StandardCharsets.UTF_8;
}
}
catch (IllegalCharsetNameException ex) { // thrown by Charset.isSupported()
return StandardCharsets.UTF_8;
}
}
public boolean is(String contentType) { public boolean is(String contentType) {
return this.contentType.equalsIgnoreCase(contentType); return this.contentType.equalsIgnoreCase(contentType);
} }
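A short usage sketch of the new charset handling, assuming the nu.marginalia.contenttype module is on the classpath; the header strings are illustrative:

import nu.marginalia.contenttype.ContentType;
import java.nio.charset.Charset;

class ContentTypeCharsetDemo {
    public static void main(String[] args) {
        Charset a = ContentType.parse("text/html; charset=iso-8859-1").asCharset(); // ISO-8859-1, "charset=" prefix is now stripped
        Charset b = ContentType.parse("text/html").asCharset();                     // no charset given, defaults to UTF-8
        Charset c = ContentType.parse("text/html; charset=macintosh-latin").asCharset(); // unsupported name, mapped to ISO-8859-1

        System.out.println(a + " " + b + " " + c);
    }
}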

View File

@@ -1,9 +1,12 @@
package nu.marginalia.contenttype; package nu.marginalia.contenttype;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.Charset; import java.nio.charset.Charset;
import java.nio.charset.IllegalCharsetNameException;
import java.nio.charset.StandardCharsets; import java.nio.charset.StandardCharsets;
import java.nio.charset.UnsupportedCharsetException;
import java.util.Map; import java.util.Map;
import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentHashMap;
@@ -23,24 +26,25 @@ public class DocumentBodyToString {
return new String(data, charset); return new String(data, charset);
} }
public static Document getParsedData(ContentType type, byte[] data, int maxLength, String url) throws IOException {
final Charset charset;
if (type.charset() == null || type.charset().isBlank()) {
charset = StandardCharsets.UTF_8;
} else {
charset = charsetMap.computeIfAbsent(type, DocumentBodyToString::computeCharset);
}
ByteArrayInputStream bais = new ByteArrayInputStream(data, 0, Math.min(data.length, maxLength));
return Jsoup.parse(bais, charset.name(), url);
}
private static Charset computeCharset(ContentType type) { private static Charset computeCharset(ContentType type) {
try { if (type.charset() == null || type.charset().isBlank())
if (type.charset() == null || type.charset().isBlank())
return StandardCharsets.UTF_8;
else {
return Charset.forName(type.charset());
}
}
catch (IllegalCharsetNameException ex) {
// Fall back to UTF-8 if we don't understand what this is. It's *probably* fine? Maybe?
return StandardCharsets.UTF_8; return StandardCharsets.UTF_8;
} else {
catch (UnsupportedCharsetException ex) { return type.asCharset();
// This is usually like Macintosh Latin
// (https://en.wikipedia.org/wiki/Macintosh_Latin_encoding)
//
// It's close enough to 8859-1 to serve
return StandardCharsets.ISO_8859_1;
} }
} }
} }
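A short usage sketch of the new parse-with-length-cap entry point, assuming the nu.marginalia.contenttype module and jsoup are on the classpath; the HTML, URL and length cap are illustrative:

import nu.marginalia.contenttype.ContentType;
import nu.marginalia.contenttype.DocumentBodyToString;
import org.jsoup.nodes.Document;
import java.nio.charset.StandardCharsets;

class ParsedBodyDemo {
    public static void main(String[] args) throws Exception {
        byte[] body = "<html><head><title>Hi</title></head></html>".getBytes(StandardCharsets.UTF_8);

        // Only the first maxLength bytes are handed to Jsoup, mirroring the truncation
        // the converter now applies at the parser step
        Document doc = DocumentBodyToString.getParsedData(
                ContentType.parse("text/html; charset=utf-8"),
                body,
                128_000,
                "https://www.example.com/");

        System.out.println(doc.title()); // Hi
    }
}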

View File

@@ -19,22 +19,19 @@ import nu.marginalia.crawl.retreival.DomainProber;
import nu.marginalia.crawl.warc.WarcArchiverFactory; import nu.marginalia.crawl.warc.WarcArchiverFactory;
import nu.marginalia.crawl.warc.WarcArchiverIf; import nu.marginalia.crawl.warc.WarcArchiverIf;
import nu.marginalia.db.DomainBlacklist; import nu.marginalia.db.DomainBlacklist;
import nu.marginalia.io.CrawledDomainReader;
import nu.marginalia.io.CrawlerOutputFile; import nu.marginalia.io.CrawlerOutputFile;
import nu.marginalia.model.EdgeDomain; import nu.marginalia.model.EdgeDomain;
import nu.marginalia.mq.MessageQueueFactory; import nu.marginalia.mq.MessageQueueFactory;
import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileWriter;
import nu.marginalia.process.ProcessConfiguration; import nu.marginalia.process.ProcessConfiguration;
import nu.marginalia.process.ProcessConfigurationModule; import nu.marginalia.process.ProcessConfigurationModule;
import nu.marginalia.process.ProcessMainClass; import nu.marginalia.process.ProcessMainClass;
import nu.marginalia.process.control.ProcessHeartbeatImpl; import nu.marginalia.process.control.ProcessHeartbeatImpl;
import nu.marginalia.process.log.WorkLog; import nu.marginalia.process.log.WorkLog;
import nu.marginalia.service.module.DatabaseModule; import nu.marginalia.service.module.DatabaseModule;
import nu.marginalia.slop.SlopCrawlDataRecord;
import nu.marginalia.storage.FileStorageService; import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorageId; import nu.marginalia.storage.model.FileStorageId;
import nu.marginalia.util.SimpleBlockingThreadPool; import nu.marginalia.util.SimpleBlockingThreadPool;
import okhttp3.ConnectionPool;
import okhttp3.Dispatcher;
import org.jetbrains.annotations.NotNull; import org.jetbrains.annotations.NotNull;
import org.slf4j.Logger; import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
@@ -85,6 +82,7 @@ public class CrawlerMain extends ProcessMainClass {
@Inject @Inject
public CrawlerMain(UserAgent userAgent, public CrawlerMain(UserAgent userAgent,
HttpFetcherImpl httpFetcher,
ProcessHeartbeatImpl heartbeat, ProcessHeartbeatImpl heartbeat,
MessageQueueFactory messageQueueFactory, DomainProber domainProber, MessageQueueFactory messageQueueFactory, DomainProber domainProber,
FileStorageService fileStorageService, FileStorageService fileStorageService,
@@ -98,6 +96,7 @@ public class CrawlerMain extends ProcessMainClass {
super(messageQueueFactory, processConfiguration, gson, CRAWLER_INBOX); super(messageQueueFactory, processConfiguration, gson, CRAWLER_INBOX);
this.userAgent = userAgent; this.userAgent = userAgent;
this.fetcher = httpFetcher;
this.heartbeat = heartbeat; this.heartbeat = heartbeat;
this.domainProber = domainProber; this.domainProber = domainProber;
this.fileStorageService = fileStorageService; this.fileStorageService = fileStorageService;
@@ -111,10 +110,6 @@ public class CrawlerMain extends ProcessMainClass {
Integer.getInteger("crawler.poolSize", 256), Integer.getInteger("crawler.poolSize", 256),
1); 1);
fetcher = new HttpFetcherImpl(userAgent,
new Dispatcher(),
new ConnectionPool(5, 10, TimeUnit.SECONDS)
);
// Wait for the blacklist to be loaded before starting the crawl // Wait for the blacklist to be loaded before starting the crawl
blacklist.waitUntilLoaded(); blacklist.waitUntilLoaded();
@@ -132,6 +127,10 @@ public class CrawlerMain extends ProcessMainClass {
System.setProperty("sun.net.client.defaultConnectTimeout", "30000"); System.setProperty("sun.net.client.defaultConnectTimeout", "30000");
System.setProperty("sun.net.client.defaultReadTimeout", "30000"); System.setProperty("sun.net.client.defaultReadTimeout", "30000");
// Set the maximum number of connections to keep alive in the connection pool
System.setProperty("jdk.httpclient.idleTimeout", "15"); // 15 seconds
System.setProperty("jdk.httpclient.connectionPoolSize", "256");
// We don't want to use too much memory caching sessions for https // We don't want to use too much memory caching sessions for https
System.setProperty("javax.net.ssl.sessionCacheSize", "2048"); System.setProperty("javax.net.ssl.sessionCacheSize", "2048");
@@ -291,7 +290,6 @@ public class CrawlerMain extends ProcessMainClass {
} }
} }
public void runForSingleDomain(String targetDomainName, FileStorageId fileStorageId) throws Exception { public void runForSingleDomain(String targetDomainName, FileStorageId fileStorageId) throws Exception {
runForSingleDomain(targetDomainName, fileStorageService.getStorage(fileStorageId).asPath()); runForSingleDomain(targetDomainName, fileStorageService.getStorage(fileStorageId).asPath());
} }
@@ -353,7 +351,7 @@ public class CrawlerMain extends ProcessMainClass {
Path newWarcFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.LIVE); Path newWarcFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.LIVE);
Path tempFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.TEMP); Path tempFile = CrawlerOutputFile.createWarcPath(outputDir, id, domain, CrawlerOutputFile.WarcFileVersion.TEMP);
Path parquetFile = CrawlerOutputFile.createParquetPath(outputDir, id, domain); Path slopFile = CrawlerOutputFile.createSlopPath(outputDir, id, domain);
// Move the WARC file to a temp file if it exists, so we can resume the crawl using the old data // Move the WARC file to a temp file if it exists, so we can resume the crawl using the old data
// while writing to the same file name as before // while writing to the same file name as before
@@ -364,10 +362,10 @@ public class CrawlerMain extends ProcessMainClass {
Files.deleteIfExists(tempFile); Files.deleteIfExists(tempFile);
} }
try (var warcRecorder = new WarcRecorder(newWarcFile); // write to a temp file for now try (var warcRecorder = new WarcRecorder(newWarcFile, fetcher); // write to a temp file for now
var retriever = new CrawlerRetreiver(fetcher, domainProber, specification, domainStateDb, warcRecorder); var retriever = new CrawlerRetreiver(fetcher, domainProber, specification, domainStateDb, warcRecorder);
CrawlDataReference reference = getReference(); CrawlDataReference reference = getReference()
) )
{ {
// Resume the crawl if it was aborted // Resume the crawl if it was aborted
if (Files.exists(tempFile)) { if (Files.exists(tempFile)) {
@@ -387,15 +385,15 @@ public class CrawlerMain extends ProcessMainClass {
reference.delete(); reference.delete();
// Convert the WARC file to Parquet // Convert the WARC file to Parquet
CrawledDocumentParquetRecordFileWriter SlopCrawlDataRecord
.convertWarc(domain, userAgent, newWarcFile, parquetFile); .convertWarc(domain, userAgent, newWarcFile, slopFile);
// Optionally archive the WARC file if full retention is enabled, // Optionally archive the WARC file if full retention is enabled,
// otherwise delete it: // otherwise delete it:
warcArchiver.consumeWarc(newWarcFile, domain); warcArchiver.consumeWarc(newWarcFile, domain);
// Mark the domain as finished in the work log // Mark the domain as finished in the work log
workLog.setJobToFinished(domain, parquetFile.toString(), size); workLog.setJobToFinished(domain, slopFile.toString(), size);
// Update the progress bar // Update the progress bar
heartbeat.setProgress(tasksDone.incrementAndGet() / (double) totalTasks); heartbeat.setProgress(tasksDone.incrementAndGet() / (double) totalTasks);
@@ -416,11 +414,22 @@ public class CrawlerMain extends ProcessMainClass {
private CrawlDataReference getReference() { private CrawlDataReference getReference() {
try { try {
return new CrawlDataReference(CrawledDomainReader.createDataStream(outputDir, domain, id)); Path slopPath = CrawlerOutputFile.getSlopPath(outputDir, id, domain);
if (Files.exists(slopPath)) {
return new CrawlDataReference(slopPath);
}
Path parquetPath = CrawlerOutputFile.getParquetPath(outputDir, id, domain);
if (Files.exists(parquetPath)) {
slopPath = migrateParquetData(parquetPath, domain, outputDir);
return new CrawlDataReference(slopPath);
}
} catch (IOException e) { } catch (IOException e) {
logger.debug("Failed to read previous crawl data for {}", specification.domain()); logger.debug("Failed to read previous crawl data for {}", specification.domain());
return new CrawlDataReference();
} }
return new CrawlDataReference();
} }
} }
@@ -480,4 +489,20 @@ public class CrawlerMain extends ProcessMainClass {
} }
} }
} }
// Migrate from parquet to slop if necessary
//
// This must be synchronized as chewing through parquet files in parallel leads to enormous memory overhead
private synchronized Path migrateParquetData(Path inputPath, String domain, Path crawlDataRoot) throws IOException {
if (!inputPath.endsWith(".parquet")) {
return inputPath;
}
Path outputFile = CrawlerOutputFile.createSlopPath(crawlDataRoot, Integer.toHexString(domain.hashCode()), domain);
SlopCrawlDataRecord.convertFromParquet(inputPath, outputFile);
return outputFile;
}
} }

View File

@@ -9,6 +9,7 @@ import java.sql.Connection;
import java.sql.DriverManager; import java.sql.DriverManager;
import java.sql.SQLException; import java.sql.SQLException;
import java.time.Instant; import java.time.Instant;
import java.util.Objects;
import java.util.Optional; import java.util.Optional;
/** Supplemental sqlite database for storing the summary of a crawl. /** Supplemental sqlite database for storing the summary of a crawl.
@@ -60,6 +61,8 @@ public class DomainStateDb implements AutoCloseable {
} }
public record FaviconRecord(String contentType, byte[] imageData) {}
public DomainStateDb(Path filename) throws SQLException { public DomainStateDb(Path filename) throws SQLException {
String sqliteDbString = "jdbc:sqlite:" + filename.toString(); String sqliteDbString = "jdbc:sqlite:" + filename.toString();
connection = DriverManager.getConnection(sqliteDbString); connection = DriverManager.getConnection(sqliteDbString);
@@ -74,7 +77,13 @@ public class DomainStateDb implements AutoCloseable {
feedUrl TEXT feedUrl TEXT
) )
"""); """);
stmt.executeUpdate("""
CREATE TABLE IF NOT EXISTS favicon (
domain TEXT PRIMARY KEY,
contentType TEXT NOT NULL,
icon BLOB NOT NULL
)
""");
stmt.execute("PRAGMA journal_mode=WAL"); stmt.execute("PRAGMA journal_mode=WAL");
} }
} }
@@ -85,6 +94,41 @@ public class DomainStateDb implements AutoCloseable {
} }
public void saveIcon(String domain, FaviconRecord faviconRecord) {
try (var stmt = connection.prepareStatement("""
INSERT OR REPLACE INTO favicon (domain, contentType, icon)
VALUES(?, ?, ?)
""")) {
stmt.setString(1, domain);
stmt.setString(2, Objects.requireNonNullElse(faviconRecord.contentType, "application/octet-stream"));
stmt.setBytes(3, faviconRecord.imageData);
stmt.executeUpdate();
}
catch (SQLException ex) {
logger.error("Failed to insert favicon", ex);
}
}
public Optional<FaviconRecord> getIcon(String domain) {
try (var stmt = connection.prepareStatement("SELECT contentType, icon FROM favicon WHERE DOMAIN = ?")) {
stmt.setString(1, domain);
var rs = stmt.executeQuery();
if (rs.next()) {
return Optional.of(
new FaviconRecord(
rs.getString("contentType"),
rs.getBytes("icon")
)
);
}
} catch (SQLException e) {
logger.error("Failed to retrieve favicon", e);
}
return Optional.empty();
}
public void save(SummaryRecord record) { public void save(SummaryRecord record) {
try (var stmt = connection.prepareStatement(""" try (var stmt = connection.prepareStatement("""
INSERT OR REPLACE INTO summary (domain, lastUpdatedEpochMs, state, stateDesc, feedUrl) INSERT OR REPLACE INTO summary (domain, lastUpdatedEpochMs, state, stateDesc, feedUrl)

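A short usage sketch of the new favicon table, assuming the crawler module is on the classpath; the database path, domain name and icon bytes are illustrative:

import nu.marginalia.crawl.DomainStateDb;
import java.nio.file.Path;

class FaviconDbDemo {
    public static void main(String[] args) throws Exception {
        try (var db = new DomainStateDb(Path.of("/tmp/domainstate.db"))) {
            byte[] iconBytes = new byte[] { 0x00, 0x01, 0x02 }; // stand-in for real favicon data

            db.saveIcon("www.example.com", new DomainStateDb.FaviconRecord("image/png", iconBytes));

            db.getIcon("www.example.com").ifPresent(icon ->
                    System.out.println(icon.contentType() + ": " + icon.imageData().length + " bytes"));
        }
    }
}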
View File

@@ -1,6 +1,6 @@
package nu.marginalia.crawl.fetcher; package nu.marginalia.crawl.fetcher;
import okhttp3.Request; import java.net.http.HttpRequest;
/** Encapsulates request modifiers; the ETag and Last-Modified tags for a resource */ /** Encapsulates request modifiers; the ETag and Last-Modified tags for a resource */
public record ContentTags(String etag, String lastMod) { public record ContentTags(String etag, String lastMod) {
@@ -17,14 +17,14 @@ public record ContentTags(String etag, String lastMod) {
} }
/** Paints the tags onto the request builder. */ /** Paints the tags onto the request builder. */
public void paint(Request.Builder getBuilder) { public void paint(HttpRequest.Builder getBuilder) {
if (etag != null) { if (etag != null) {
getBuilder.addHeader("If-None-Match", etag); getBuilder.header("If-None-Match", etag);
} }
if (lastMod != null) { if (lastMod != null) {
getBuilder.addHeader("If-Modified-Since", lastMod); getBuilder.header("If-Modified-Since", lastMod);
} }
} }
} }
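A short usage sketch of ContentTags after the okhttp3-to-java.net.http migration, assuming the crawler module is on the classpath; the URL, ETag and date are illustrative:

import nu.marginalia.crawl.fetcher.ContentTags;
import java.net.URI;
import java.net.http.HttpRequest;

class ContentTagsDemo {
    public static void main(String[] args) {
        var builder = HttpRequest.newBuilder(URI.create("https://www.example.com/"));

        // Paints If-None-Match and If-Modified-Since onto the request builder
        new ContentTags("\"abc123\"", "Wed, 21 Oct 2015 07:28:00 GMT").paint(builder);

        System.out.println(builder.build().headers().map());
    }
}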

View File

@@ -1,33 +1,14 @@
package nu.marginalia.crawl.fetcher; package nu.marginalia.crawl.fetcher;
import okhttp3.Cookie; import java.io.IOException;
import okhttp3.CookieJar; import java.net.CookieHandler;
import okhttp3.HttpUrl; import java.net.URI;
import java.util.Collections;
import java.util.List; import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentHashMap;
public class Cookies { public class Cookies extends CookieHandler {
final ThreadLocal<ConcurrentHashMap<String, List<Cookie>>> cookieJar = ThreadLocal.withInitial(ConcurrentHashMap::new); final ThreadLocal<ConcurrentHashMap<String, List<String>>> cookieJar = ThreadLocal.withInitial(ConcurrentHashMap::new);
public CookieJar getJar() {
return new CookieJar() {
@Override
public void saveFromResponse(HttpUrl url, List<Cookie> cookies) {
if (!cookies.isEmpty()) {
cookieJar.get().put(url.host(), cookies);
}
}
@Override
public List<Cookie> loadForRequest(HttpUrl url) {
return cookieJar.get().getOrDefault(url.host(), Collections.emptyList());
}
};
}
public void clear() { public void clear() {
cookieJar.get().clear(); cookieJar.get().clear();
@@ -38,6 +19,16 @@ public class Cookies {
} }
public List<String> getCookies() { public List<String> getCookies() {
return cookieJar.get().values().stream().flatMap(List::stream).map(Cookie::toString).toList(); return cookieJar.get().values().stream().flatMap(List::stream).toList();
}
@Override
public Map<String, List<String>> get(URI uri, Map<String, List<String>> requestHeaders) throws IOException {
return cookieJar.get();
}
@Override
public void put(URI uri, Map<String, List<String>> responseHeaders) throws IOException {
cookieJar.get().putAll(responseHeaders);
} }
} }
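Since Cookies now extends java.net.CookieHandler, it can be installed directly on the JDK HTTP client. A short usage sketch, assuming the crawler module is on the classpath:

import nu.marginalia.crawl.fetcher.Cookies;
import java.net.http.HttpClient;

class CookiesHandlerDemo {
    public static void main(String[] args) {
        var cookies = new Cookies();

        HttpClient client = HttpClient.newBuilder()
                .cookieHandler(cookies) // the crawler's cookie jar is consulted on every request
                .build();

        System.out.println(client.cookieHandler().isPresent()); // true
    }
}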

View File

@@ -3,6 +3,7 @@ package nu.marginalia.crawl.fetcher;
 import com.google.inject.ImplementedBy;
 import crawlercommons.robots.SimpleRobotRules;
 import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
+import nu.marginalia.crawl.retreival.CrawlDelayTimer;
 import nu.marginalia.model.EdgeDomain;
 import nu.marginalia.model.EdgeUrl;
 import nu.marginalia.model.body.HttpFetchResult;
@@ -11,10 +12,10 @@ import nu.marginalia.model.crawldata.CrawlerDomainStatus;
 import java.util.List;

 @ImplementedBy(HttpFetcherImpl.class)
-public interface HttpFetcher {
+public interface HttpFetcher extends AutoCloseable {
     void setAllowAllContentTypes(boolean allowAllContentTypes);

-    List<String> getCookies();
+    Cookies getCookies();
     void clearCookies();

     DomainProbeResult probeDomain(EdgeUrl url);
@@ -27,7 +28,9 @@ public interface HttpFetcher {
     HttpFetchResult fetchContent(EdgeUrl url,
                                  WarcRecorder recorder,
                                  ContentTags tags,
-                                 ProbeType probeType) throws HttpFetcherImpl.RateLimitException, Exception;
+                                 ProbeType probeType) throws Exception;
+
+    List<EdgeUrl> fetchSitemapUrls(String rootSitemapUrl, CrawlDelayTimer delayTimer);

     SimpleRobotRules fetchRobotRules(EdgeDomain domain, WarcRecorder recorder);

nu/marginalia/crawl/fetcher/HttpFetcherImpl.java

@@ -1,35 +1,39 @@
 package nu.marginalia.crawl.fetcher;

 import com.google.inject.Inject;
+import com.google.inject.Singleton;
 import crawlercommons.robots.SimpleRobotRules;
 import crawlercommons.robots.SimpleRobotRulesParser;
 import nu.marginalia.UserAgent;
-import nu.marginalia.crawl.fetcher.socket.FastTerminatingSocketFactory;
-import nu.marginalia.crawl.fetcher.socket.IpInterceptingNetworkInterceptor;
 import nu.marginalia.crawl.fetcher.socket.NoSecuritySSL;
 import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
+import nu.marginalia.crawl.retreival.CrawlDelayTimer;
 import nu.marginalia.model.EdgeDomain;
 import nu.marginalia.model.EdgeUrl;
 import nu.marginalia.model.body.ContentTypeLogic;
 import nu.marginalia.model.body.DocumentBodyExtractor;
 import nu.marginalia.model.body.HttpFetchResult;
 import nu.marginalia.model.crawldata.CrawlerDomainStatus;
-import okhttp3.ConnectionPool;
-import okhttp3.Dispatcher;
-import okhttp3.OkHttpClient;
-import okhttp3.Request;
+import org.jsoup.Jsoup;
+import org.jsoup.nodes.Document;
+import org.jsoup.parser.Parser;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

-import javax.net.ssl.X509TrustManager;
-import java.io.InterruptedIOException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URISyntaxException;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+import java.net.http.HttpTimeoutException;
 import java.time.Duration;
-import java.util.List;
-import java.util.Objects;
-import java.util.Optional;
-import java.util.concurrent.TimeUnit;
+import java.util.*;
+import java.util.concurrent.Executors;
+import java.util.zip.GZIPInputStream;

+@Singleton
 public class HttpFetcherImpl implements HttpFetcher {

     private final Logger logger = LoggerFactory.getLogger(getClass());
@@ -40,39 +44,28 @@ public class HttpFetcherImpl implements HttpFetcher {
     private static final SimpleRobotRulesParser robotsParser = new SimpleRobotRulesParser();
     private static final ContentTypeLogic contentTypeLogic = new ContentTypeLogic();

+    private final Duration requestTimeout = Duration.ofSeconds(10);
+
     @Override
     public void setAllowAllContentTypes(boolean allowAllContentTypes) {
         contentTypeLogic.setAllowAllContentTypes(allowAllContentTypes);
     }

-    private final OkHttpClient client;
-
-    private static final FastTerminatingSocketFactory ftSocketFactory = new FastTerminatingSocketFactory();
-
-    private OkHttpClient createClient(Dispatcher dispatcher, ConnectionPool pool) {
-        var builder = new OkHttpClient.Builder();
-        if (dispatcher != null) {
-            builder.dispatcher(dispatcher);
-        }
-
-        return builder.sslSocketFactory(NoSecuritySSL.buildSocketFactory(), (X509TrustManager) NoSecuritySSL.trustAllCerts[0])
-                .socketFactory(ftSocketFactory)
-                .hostnameVerifier(NoSecuritySSL.buildHostnameVerifyer())
-                .addNetworkInterceptor(new IpInterceptingNetworkInterceptor())
-                .connectionPool(pool)
-                .cookieJar(cookies.getJar())
-                .followRedirects(true)
-                .followSslRedirects(true)
-                .connectTimeout(8, TimeUnit.SECONDS)
-                .readTimeout(10, TimeUnit.SECONDS)
-                .writeTimeout(10, TimeUnit.SECONDS)
-                .build();
+    private final HttpClient client;
+
+    private HttpClient createClient() {
+        return HttpClient.newBuilder()
+                .sslContext(NoSecuritySSL.buildSslContext())
+                .cookieHandler(cookies)
+                .followRedirects(HttpClient.Redirect.NORMAL)
+                .connectTimeout(Duration.ofSeconds(8))
+                .executor(Executors.newCachedThreadPool())
+                .build();
     }

     @Override
-    public List<String> getCookies() {
-        return cookies.getCookies();
+    public Cookies getCookies() {
+        return cookies;
     }

     @Override
@@ -81,26 +74,24 @@ public class HttpFetcherImpl implements HttpFetcher {
     }

     @Inject
-    public HttpFetcherImpl(UserAgent userAgent,
-                           Dispatcher dispatcher,
-                           ConnectionPool connectionPool)
+    public HttpFetcherImpl(UserAgent userAgent)
     {
-        this.client = createClient(dispatcher, connectionPool);
+        this.client = createClient();
         this.userAgentString = userAgent.uaString();
         this.userAgentIdentifier = userAgent.uaIdentifier();
     }

     public HttpFetcherImpl(String userAgent) {
-        this.client = createClient(null, new ConnectionPool());
+        this.client = createClient();
         this.userAgentString = userAgent;
         this.userAgentIdentifier = userAgent;
     }

     // Not necessary in prod, but useful in test
     public void close() {
-        client.dispatcher().executorService().shutdown();
-        client.connectionPool().evictAll();
+        client.close();
     }

     /**
      * Probe the domain to see if it is reachable, attempting to identify which schema to use,
      * and if there are any redirects. This is done by one or more HEAD requests.
@@ -110,19 +101,26 @@ public class HttpFetcherImpl implements HttpFetcher {
      */
     @Override
     public DomainProbeResult probeDomain(EdgeUrl url) {
-        var head = new Request.Builder().head().addHeader("User-agent", userAgentString)
-                .url(url.toString())
-                .build();
+        HttpRequest head;
+        try {
+            head = HttpRequest.newBuilder()
+                    .HEAD()
+                    .uri(url.asURI())
+                    .header("User-agent", userAgentString)
+                    .timeout(requestTimeout)
+                    .build();
+        } catch (URISyntaxException e) {
+            return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, "Invalid URL");
+        }

-        var call = client.newCall(head);
+        try {
+            var rsp = client.send(head, HttpResponse.BodyHandlers.discarding());
+            EdgeUrl rspUri = new EdgeUrl(rsp.uri());

-        try (var rsp = call.execute()) {
-            EdgeUrl requestUrl = new EdgeUrl(rsp.request().url().toString());
-
-            if (!Objects.equals(requestUrl.domain, url.domain)) {
-                return new DomainProbeResult.Redirect(requestUrl.domain);
+            if (!Objects.equals(rspUri.domain, url.domain)) {
+                return new DomainProbeResult.Redirect(rspUri.domain);
             }
-            return new DomainProbeResult.Ok(requestUrl);
+            return new DomainProbeResult.Ok(rspUri);
         }
         catch (Exception ex) {
             return new DomainProbeResult.Error(CrawlerDomainStatus.ERROR, ex.getMessage());
@@ -140,21 +138,25 @@ public class HttpFetcherImpl implements HttpFetcher {
                                                       WarcRecorder warcRecorder,
                                                       ContentTags tags) throws RateLimitException {
         if (tags.isEmpty() && contentTypeLogic.isUrlLikeBinary(url)) {

-            var headBuilder = new Request.Builder().head()
-                    .addHeader("User-agent", userAgentString)
-                    .addHeader("Accept-Encoding", "gzip")
-                    .url(url.toString());
-
-            var head = headBuilder.build();
-            var call = client.newCall(head);
+            try {
+                var headBuilder = HttpRequest.newBuilder()
+                        .HEAD()
+                        .uri(url.asURI())
+                        .header("User-agent", userAgentString)
+                        .header("Accept-Encoding", "gzip")
+                        .timeout(requestTimeout)
+                        ;

-            try (var rsp = call.execute()) {
-                var contentTypeHeader = rsp.header("Content-type");
+                var rsp = client.send(headBuilder.build(), HttpResponse.BodyHandlers.discarding());
+                var headers = rsp.headers();
+
+                var contentTypeHeader = headers.firstValue("Content-Type").orElse(null);

                 if (contentTypeHeader != null && !contentTypeLogic.isAllowableContentType(contentTypeHeader)) {
-                    warcRecorder.flagAsFailedContentTypeProbe(url, contentTypeHeader, rsp.code());
-                    return new ContentTypeProbeResult.BadContentType(contentTypeHeader, rsp.code());
+                    warcRecorder.flagAsFailedContentTypeProbe(url, contentTypeHeader, rsp.statusCode());
+                    return new ContentTypeProbeResult.BadContentType(contentTypeHeader, rsp.statusCode());
                 }

                 // Update the URL to the final URL of the HEAD request, otherwise we might end up doing
@@ -168,27 +170,27 @@ public class HttpFetcherImpl implements HttpFetcher {
                 // too many eyebrows when looking at the logs on the target server. Overall it's probably desirable
                 // that it looks like the traffic makes sense, as opposed to looking like a broken bot.

-                var redirectUrl = new EdgeUrl(rsp.request().url().toString());
+                var redirectUrl = new EdgeUrl(rsp.uri());
                 EdgeUrl ret;

                 if (Objects.equals(redirectUrl.domain, url.domain)) ret = redirectUrl;
                 else ret = url;

                 // Intercept rate limiting
-                if (rsp.code() == 429) {
-                    throw new HttpFetcherImpl.RateLimitException(Objects.requireNonNullElse(rsp.header("Retry-After"), "1"));
+                if (rsp.statusCode() == 429) {
+                    throw new HttpFetcherImpl.RateLimitException(headers.firstValue("Retry-After").orElse("1"));
                 }

                 return new ContentTypeProbeResult.Ok(ret);
             }
+            catch (HttpTimeoutException ex) {
+                warcRecorder.flagAsTimeout(url);
+                return new ContentTypeProbeResult.Timeout(ex);
+            }
             catch (RateLimitException ex) {
                 throw ex;
             }
-            catch (InterruptedIOException ex) {
-                warcRecorder.flagAsTimeout(url);
-                return new ContentTypeProbeResult.Timeout(ex);
-            } catch (Exception ex) {
+            catch (Exception ex) {
                 logger.error("Error during fetching {}[{}]", ex.getClass().getSimpleName(), ex.getMessage());
                 warcRecorder.flagAsError(url, ex);
@@ -210,13 +212,15 @@ public class HttpFetcherImpl implements HttpFetcher {
                                  ProbeType probeType)
             throws Exception
     {
-        var getBuilder = new Request.Builder().get();
-
-        getBuilder.url(url.toString())
-                .addHeader("Accept-Encoding", "gzip")
-                .addHeader("Accept-Language", "en,*;q=0.5")
-                .addHeader("Accept", "text/html, application/xhtml+xml, text/*;q=0.8")
-                .addHeader("User-agent", userAgentString);
+        var getBuilder = HttpRequest.newBuilder()
+                .GET()
+                .uri(url.asURI())
+                .header("User-agent", userAgentString)
+                .header("Accept-Encoding", "gzip")
+                .header("Accept-Language", "en,*;q=0.5")
+                .header("Accept", "text/html, application/xhtml+xml, text/*;q=0.8")
+                .timeout(requestTimeout)
+                ;

         contentTags.paint(getBuilder);
@@ -242,6 +246,126 @@ public class HttpFetcherImpl implements HttpFetcher {
         return new SitemapRetriever();
     }
@Override
public List<EdgeUrl> fetchSitemapUrls(String root, CrawlDelayTimer delayTimer) {
try {
List<EdgeUrl> ret = new ArrayList<>();
Set<String> seenUrls = new HashSet<>();
Set<String> seenSitemaps = new HashSet<>();
Deque<EdgeUrl> sitemapQueue = new LinkedList<>();
EdgeUrl rootSitemapUrl = new EdgeUrl(root);
sitemapQueue.add(rootSitemapUrl);
int fetchedSitemaps = 0;
while (!sitemapQueue.isEmpty() && ret.size() < 20_000 && ++fetchedSitemaps < 10) {
var head = sitemapQueue.removeFirst();
switch (fetchSitemap(head)) {
case SitemapResult.SitemapUrls(List<String> urls) -> {
for (var url : urls) {
if (seenUrls.add(url)) {
EdgeUrl.parse(url)
.filter(u -> u.domain.equals(rootSitemapUrl.domain))
.ifPresent(ret::add);
}
}
}
case SitemapResult.SitemapReferences(List<String> refs) -> {
for (var ref : refs) {
if (seenSitemaps.add(ref)) {
EdgeUrl.parse(ref)
.filter(url -> url.domain.equals(rootSitemapUrl.domain))
.ifPresent(sitemapQueue::addFirst);
}
}
}
case SitemapResult.SitemapError() -> {}
}
delayTimer.waitFetchDelay();
}
return ret;
}
catch (Exception ex) {
logger.error("Error while fetching sitemaps via {}: {} ({})", root, ex.getClass().getSimpleName(), ex.getMessage());
return List.of();
}
}
private SitemapResult fetchSitemap(EdgeUrl sitemapUrl) throws URISyntaxException, IOException, InterruptedException {
HttpRequest getRequest = HttpRequest.newBuilder()
.GET()
.uri(sitemapUrl.asURI())
.header("Accept-Encoding", "gzip")
.header("Accept", "text/*, */*;q=0.9")
.header("User-agent", userAgentString)
.timeout(requestTimeout)
.build();
var response = client.send(getRequest, HttpResponse.BodyHandlers.ofInputStream());
if (response.statusCode() != 200) {
return new SitemapResult.SitemapError();
}
try (InputStream inputStream = response.body()) {
InputStream parserStream;
if (sitemapUrl.path.endsWith(".gz")) {
parserStream = new GZIPInputStream(inputStream);
}
else {
parserStream = inputStream;
}
Document parsedSitemap = Jsoup.parse(parserStream, "UTF-8", sitemapUrl.toString(), Parser.xmlParser());
if (parsedSitemap.childrenSize() == 0) {
return new SitemapResult.SitemapError();
}
String rootTagName = parsedSitemap.child(0).tagName();
return switch (rootTagName.toLowerCase()) {
case "sitemapindex" -> {
List<String> references = new ArrayList<>();
for (var locTag : parsedSitemap.getElementsByTag("loc")) {
references.add(locTag.text().trim());
}
yield new SitemapResult.SitemapReferences(Collections.unmodifiableList(references));
}
case "urlset" -> {
List<String> urls = new ArrayList<>();
for (var locTag : parsedSitemap.select("url > loc")) {
urls.add(locTag.text().trim());
}
yield new SitemapResult.SitemapUrls(Collections.unmodifiableList(urls));
}
case "rss", "atom" -> {
List<String> urls = new ArrayList<>();
for (var locTag : parsedSitemap.select("link, url")) {
urls.add(locTag.text().trim());
}
yield new SitemapResult.SitemapUrls(Collections.unmodifiableList(urls));
}
default -> new SitemapResult.SitemapError();
};
}
}
private sealed interface SitemapResult {
record SitemapUrls(List<String> urls) implements SitemapResult {}
record SitemapReferences(List<String> sitemapRefs) implements SitemapResult {}
record SitemapError() implements SitemapResult {}
}
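A rough sketch of how the new fetchSitemapUrls entry point is meant to be driven from the sitemaps declared in robots.txt, mirroring the CrawlerRetreiver change further down; the wrapper class and method names here are illustrative:

import java.util.List;

import crawlercommons.robots.SimpleRobotRules;
import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
import nu.marginalia.model.EdgeUrl;

class SitemapSeedingSketch {
    // Collect every same-domain URL advertised by the sitemaps listed in robots.txt
    static List<EdgeUrl> collect(HttpFetcher fetcher, SimpleRobotRules robotsRules, CrawlDelayTimer delayTimer) {
        return robotsRules.getSitemaps().stream()
                .flatMap(sitemap -> fetcher.fetchSitemapUrls(sitemap, delayTimer).stream())
                .toList();
    }
}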
     @Override
     public SimpleRobotRules fetchRobotRules(EdgeDomain domain, WarcRecorder recorder) {
         var ret = fetchAndParseRobotsTxt(new EdgeUrl("https", domain, null, "/robots.txt", null), recorder);
@@ -257,14 +381,15 @@ public class HttpFetcherImpl implements HttpFetcher {
     private Optional<SimpleRobotRules> fetchAndParseRobotsTxt(EdgeUrl url, WarcRecorder recorder) {
         try {
-            var getBuilder = new Request.Builder().get();
-
-            getBuilder.url(url.toString())
-                    .addHeader("Accept-Encoding", "gzip")
-                    .addHeader("Accept", "text/*, */*;q=0.9")
-                    .addHeader("User-agent", userAgentString);
+            var getRequest = HttpRequest.newBuilder()
+                    .GET()
+                    .uri(url.asURI())
+                    .header("Accept-Encoding", "gzip")
+                    .header("Accept", "text/*, */*;q=0.9")
+                    .header("User-agent", userAgentString)
+                    .timeout(requestTimeout);

-            HttpFetchResult result = recorder.fetch(client, getBuilder.build());
+            HttpFetchResult result = recorder.fetch(client, getRequest.build());

             return DocumentBodyExtractor.asBytes(result).mapOpt((contentType, body) ->
                     robotsParser.parseContent(url.toString(),

nu/marginalia/crawl/fetcher/socket/IpInterceptingNetworkInterceptor.java

@@ -1,31 +0,0 @@
package nu.marginalia.crawl.fetcher.socket;
import okhttp3.Interceptor;
import okhttp3.Response;
import org.jetbrains.annotations.NotNull;
import java.io.IOException;
/** An interceptor that intercepts network requests and adds the remote IP address as
* a header in the response. This is used to pass the remote IP address to the Warc
* writer, as this information is not available in the response.
*/
public class IpInterceptingNetworkInterceptor implements Interceptor {
private static final String pseudoHeaderName = "X-Marginalia-Remote-IP";
@NotNull
@Override
public Response intercept(@NotNull Interceptor.Chain chain) throws IOException {
String IP = chain.connection().socket().getInetAddress().getHostAddress();
return chain.proceed(chain.request())
.newBuilder()
.addHeader(pseudoHeaderName, IP)
.build();
}
public static String getIpFromResponse(Response response) {
return response.header(pseudoHeaderName);
}
}

nu/marginalia/crawl/fetcher/socket/NoSecuritySSL.java

@@ -27,7 +27,7 @@ public class NoSecuritySSL {
         }
     };

-    public static SSLSocketFactory buildSocketFactory() {
+    public static SSLContext buildSslContext() {
         try {
             // Install the all-trusting trust manager
             final SSLContext sslContext = SSLContext.getInstance("TLS");
@@ -40,14 +40,11 @@
             clientSessionContext.setSessionCacheSize(2048);

             // Create a ssl socket factory with our all-trusting manager
-            return sslContext.getSocketFactory();
+            return sslContext;
         }
         catch (Exception e) {
             throw new RuntimeException(e);
         }
     }
-
-    public static HostnameVerifier buildHostnameVerifyer() {
-        return (hn, session) -> true;
-    }
 }
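The rebuilt SSLContext is intended to be passed straight to the JDK client builder, as the HttpFetcherImpl change above does; a minimal sketch, with an illustrative timeout:

import java.net.http.HttpClient;
import java.time.Duration;

import nu.marginalia.crawl.fetcher.socket.NoSecuritySSL;

class PermissiveClientSketch {
    static HttpClient build() {
        // Certificate validation is deliberately disabled for crawling purposes
        return HttpClient.newBuilder()
                .sslContext(NoSecuritySSL.buildSslContext())
                .connectTimeout(Duration.ofSeconds(8))
                .build();
    }
}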

nu/marginalia/crawl/fetcher/warc/WarcInputBuffer.java

@@ -1,14 +1,14 @@
 package nu.marginalia.crawl.fetcher.warc;

-import okhttp3.Headers;
-import okhttp3.Response;
 import org.apache.commons.io.input.BOMInputStream;
 import org.netpreserve.jwarc.WarcTruncationReason;

 import java.io.*;
+import java.net.http.HttpHeaders;
+import java.net.http.HttpResponse;
 import java.nio.file.Files;
 import java.nio.file.Path;
-import java.util.Objects;
+import java.util.Map;
 import java.util.zip.GZIPInputStream;

 /** Input buffer for temporary storage of a HTTP response
@@ -17,8 +17,9 @@ import java.util.zip.GZIPInputStream;
  * */
 public abstract class WarcInputBuffer implements AutoCloseable {
     protected WarcTruncationReason truncationReason = WarcTruncationReason.NOT_TRUNCATED;
-    protected Headers headers;
-    WarcInputBuffer(Headers headers) {
+    protected HttpHeaders headers;
+
+    WarcInputBuffer(HttpHeaders headers) {
         this.headers = headers;
     }
@@ -30,7 +31,7 @@ public abstract class WarcInputBuffer implements AutoCloseable {
     public final WarcTruncationReason truncationReason() { return truncationReason; }

-    public final Headers headers() { return headers; }
+    public final HttpHeaders headers() { return headers; }

     /** Create a buffer for a response.
      * If the response is small and not compressed, it will be stored in memory.
@@ -38,26 +39,27 @@ public abstract class WarcInputBuffer implements AutoCloseable {
      * and suppressed from the headers.
      * If an error occurs, a buffer will be created with no content and an error status.
      */
-    static WarcInputBuffer forResponse(Response rsp) {
+    static WarcInputBuffer forResponse(HttpResponse<InputStream> rsp) {
         if (rsp == null)
             return new ErrorBuffer();

-        try {
-            String contentLengthHeader = Objects.requireNonNullElse(rsp.header("Content-Length"), "-1");
-            int contentLength = Integer.parseInt(contentLengthHeader);
-            String contentEncoding = rsp.header("Content-Encoding");
+        var headers = rsp.headers();
+
+        try (var is = rsp.body()) {
+            int contentLength = (int) headers.firstValueAsLong("Content-Length").orElse(-1L);
+            String contentEncoding = headers.firstValue("Content-Encoding").orElse(null);

             if (contentEncoding == null && contentLength > 0 && contentLength < 8192) {
                 // If the content is small and not compressed, we can just read it into memory
-                return new MemoryBuffer(rsp, contentLength);
+                return new MemoryBuffer(headers, is, contentLength);
             }
             else {
                 // Otherwise, we unpack it into a file and read it from there
-                return new FileBuffer(rsp);
+                return new FileBuffer(headers, is);
             }
         }
         catch (Exception ex) {
-            return new ErrorBuffer(rsp);
+            return new ErrorBuffer();
         }
     }
@@ -99,12 +101,8 @@ public abstract class WarcInputBuffer implements AutoCloseable {
 /** Pseudo-buffer for when we have an error */
 class ErrorBuffer extends WarcInputBuffer {
     public ErrorBuffer() {
-        super(Headers.of());
-        truncationReason = WarcTruncationReason.UNSPECIFIED;
-    }
-
-    public ErrorBuffer(Response rsp) {
-        super(rsp.headers());
+        super(HttpHeaders.of(Map.of(), (k,v)->false));
         truncationReason = WarcTruncationReason.UNSPECIFIED;
     }
@@ -125,12 +123,12 @@ class ErrorBuffer extends WarcInputBuffer {
 /** Buffer for when we have the response in memory */
 class MemoryBuffer extends WarcInputBuffer {
     byte[] data;
-    public MemoryBuffer(Response response, int size) {
-        super(response.headers());
+    public MemoryBuffer(HttpHeaders headers, InputStream responseStream, int size) {
+        super(headers);

         var outputStream = new ByteArrayOutputStream(size);

-        copy(response.body().byteStream(), outputStream);
+        copy(responseStream, outputStream);

         data = outputStream.toByteArray();
     }
@@ -154,19 +152,15 @@ class MemoryBuffer extends WarcInputBuffer {
 class FileBuffer extends WarcInputBuffer {
     private final Path tempFile;

-    public FileBuffer(Response response) throws IOException {
-        super(suppressContentEncoding(response.headers()));
+    public FileBuffer(HttpHeaders headers, InputStream responseStream) throws IOException {
+        super(suppressContentEncoding(headers));

         this.tempFile = Files.createTempFile("rsp", ".html");

-        if (response.body() == null) {
-            truncationReason = WarcTruncationReason.DISCONNECT;
-            return;
-        }
-
-        if ("gzip".equals(response.header("Content-Encoding"))) {
+        if ("gzip".equalsIgnoreCase(headers.firstValue("Content-Encoding").orElse(""))) {
             try (var out = Files.newOutputStream(tempFile)) {
-                copy(new GZIPInputStream(response.body().byteStream()), out);
+                copy(new GZIPInputStream(responseStream), out);
             }
             catch (Exception ex) {
                 truncationReason = WarcTruncationReason.UNSPECIFIED;
@@ -174,7 +168,7 @@ class FileBuffer extends WarcInputBuffer {
         }
         else {
             try (var out = Files.newOutputStream(tempFile)) {
-                copy(response.body().byteStream(), out);
+                copy(responseStream, out);
             }
             catch (Exception ex) {
                 truncationReason = WarcTruncationReason.UNSPECIFIED;
@@ -182,22 +176,13 @@ class FileBuffer extends WarcInputBuffer {
         }
     }

-    private static Headers suppressContentEncoding(Headers headers) {
-        var builder = new Headers.Builder();
-        headers.toMultimap().forEach((k, values) -> {
+    private static HttpHeaders suppressContentEncoding(HttpHeaders headers) {
+        return HttpHeaders.of(headers.map(), (k, v) -> {
             if ("Content-Encoding".equalsIgnoreCase(k)) {
-                return;
-            }
-            if ("Transfer-Encoding".equalsIgnoreCase(k)) {
-                return;
-            }
-            for (var value : values) {
-                builder.add(k, value);
+                return false;
             }
+            return !"Transfer-Encoding".equalsIgnoreCase(k);
         });
-        return builder.build();
     }

nu/marginalia/crawl/fetcher/warc/WarcProtocolReconstructor.java

@@ -1,11 +1,12 @@
 package nu.marginalia.crawl.fetcher.warc;

-import okhttp3.Protocol;
-import okhttp3.Response;
 import org.apache.commons.lang3.StringUtils;

 import java.net.URI;
 import java.net.URLEncoder;
+import java.net.http.HttpClient;
+import java.net.http.HttpHeaders;
+import java.net.http.HttpResponse;
 import java.nio.charset.StandardCharsets;
 import java.util.*;
 import java.util.stream.Collectors;
@@ -75,13 +76,13 @@ public class WarcProtocolReconstructor {
         return "HTTP/" + version + " " + statusCode + " " + statusMessage + "\r\n" + headerString + "\r\n\r\n";
     }

-    static String getResponseHeader(Response response, long size) {
-        String version = response.protocol() == Protocol.HTTP_1_1 ? "1.1" : "2.0";
+    static String getResponseHeader(HttpResponse<?> response, long size) {
+        String version = response.version() == HttpClient.Version.HTTP_1_1 ? "1.1" : "2.0";

-        String statusCode = String.valueOf(response.code());
-        String statusMessage = STATUS_CODE_MAP.getOrDefault(response.code(), "Unknown");
+        String statusCode = String.valueOf(response.statusCode());
+        String statusMessage = STATUS_CODE_MAP.getOrDefault(response.statusCode(), "Unknown");

-        String headerString = getHeadersAsString(response, size);
+        String headerString = getHeadersAsString(response.headers(), size);

         return "HTTP/" + version + " " + statusCode + " " + statusMessage + "\r\n" + headerString + "\r\n\r\n";
     }
@@ -148,10 +149,10 @@ public class WarcProtocolReconstructor {
         return joiner.toString();
     }

-    static private String getHeadersAsString(Response response, long responseSize) {
+    static private String getHeadersAsString(HttpHeaders headers, long responseSize) {
         StringJoiner joiner = new StringJoiner("\r\n");

-        response.headers().toMultimap().forEach((k, values) -> {
+        headers.map().forEach((k, values) -> {
             String headerCapitalized = capitalizeHeader(k);

             // Omit pseudoheaders injected by the crawler itself
@@ -179,8 +180,8 @@ public class WarcProtocolReconstructor {
         return joiner.toString();
     }

-    // okhttp gives us flattened headers, so we need to reconstruct Camel-Kebab-Case style
-    // for the WARC parser's sake...
+    // okhttp gave us flattened headers, so we need to reconstruct Camel-Kebab-Case style
+    // for the WARC parser's sake... (do we still need this, mr chesterton?)
     static private String capitalizeHeader(String k) {
         return Arrays.stream(StringUtils.split(k, '-'))
                 .map(StringUtils::capitalize)
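As a worked example of the normalization described in the comment, lower-cased header names are rebuilt into kebab-capitalized form. The joining delimiter below is an assumption, since the tail of the method is truncated in this view:

import java.util.Arrays;
import java.util.stream.Collectors;

import org.apache.commons.lang3.StringUtils;

class HeaderCaseSketch {
    // "content-type" -> "Content-Type", "x-has-cookies" -> "X-Has-Cookies"
    static String capitalize(String name) {
        return Arrays.stream(StringUtils.split(name, '-'))
                .map(StringUtils::capitalize)
                .collect(Collectors.joining("-"));
    }
}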

nu/marginalia/crawl/fetcher/warc/WarcRecorder.java

@@ -1,13 +1,11 @@
 package nu.marginalia.crawl.fetcher.warc;

 import nu.marginalia.crawl.fetcher.ContentTags;
+import nu.marginalia.crawl.fetcher.Cookies;
 import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
-import nu.marginalia.crawl.fetcher.socket.IpInterceptingNetworkInterceptor;
 import nu.marginalia.model.EdgeDomain;
 import nu.marginalia.model.EdgeUrl;
 import nu.marginalia.model.body.HttpFetchResult;
-import okhttp3.OkHttpClient;
-import okhttp3.Request;
 import org.jetbrains.annotations.Nullable;
 import org.netpreserve.jwarc.*;
 import org.slf4j.Logger;
@@ -18,16 +16,19 @@ import java.io.InputStream;
 import java.net.InetAddress;
 import java.net.URI;
 import java.net.URISyntaxException;
+import java.net.http.HttpClient;
+import java.net.http.HttpResponse;
 import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.security.NoSuchAlgorithmException;
+import java.time.Duration;
 import java.time.Instant;
 import java.util.*;

 /** Based on JWarc's fetch method, APL 2.0 license
  * <p></p>
- * This class wraps OkHttp's OkHttpClient and records the HTTP request and response in a WARC file,
+ * This class wraps HttpClient and records the HTTP request and response in a WARC file,
  * as best is possible given not all the data is available at the same time and needs to
  * be reconstructed.
  */
@@ -47,20 +48,22 @@ public class WarcRecorder implements AutoCloseable {
     // Affix a version string in case we need to change the format in the future
     // in some way
     private final String warcRecorderVersion = "1.0";
+    private final Cookies cookies;

-    // We need to know if the site uses cookies so this can be reported among the search results
-    // -- flip this to true if we see any cookies. This information will also be painted on any
-    // revisited pages. It's not 100% perfect and a bit order dependent, but it's good enough.
-    private final WarcXCookieInformationHeader cookieInformation = new WarcXCookieInformationHeader();

     /**
      * Create a new WarcRecorder that will write to the given file
      *
      * @param warcFile The file to write to
      */
-    public WarcRecorder(Path warcFile) throws IOException {
+    public WarcRecorder(Path warcFile, HttpFetcherImpl fetcher) throws IOException {
         this.warcFile = warcFile;
         this.writer = new WarcWriter(warcFile);
+        this.cookies = fetcher.getCookies();
+    }
+
+    public WarcRecorder(Path warcFile, Cookies cookies) throws IOException {
+        this.warcFile = warcFile;
+        this.writer = new WarcWriter(warcFile);
+        this.cookies = cookies;
     }

     /**
@@ -70,36 +73,45 @@ public class WarcRecorder implements AutoCloseable {
     public WarcRecorder() throws IOException {
         this.warcFile = Files.createTempFile("warc", ".warc.gz");
         this.writer = new WarcWriter(this.warcFile);
+        this.cookies = new Cookies();

         temporaryFile = true;
     }

-    public HttpFetchResult fetch(OkHttpClient client, Request request) throws NoSuchAlgorithmException,
-            IOException,
-            URISyntaxException,
-            InterruptedException
+    public HttpFetchResult fetch(HttpClient client,
+                                 java.net.http.HttpRequest request)
+            throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException
     {
-        URI requestUri = request.url().uri();
+        URI requestUri = request.uri();

         WarcDigestBuilder responseDigestBuilder = new WarcDigestBuilder();
         WarcDigestBuilder payloadDigestBuilder = new WarcDigestBuilder();

-        String ip;
         Instant date = Instant.now();

-        var call = client.newCall(request);
+        // Not entirely sure why we need to do this, but keeping it due to Chesterton's Fence
+        Map<String, List<String>> extraHeaders = new HashMap<>(request.headers().map());

-        cookieInformation.update(client, request.url());
+        HttpResponse<InputStream> response;
+        try {
+            response = client.send(request, java.net.http.HttpResponse.BodyHandlers.ofInputStream());
+        }
+        catch (Exception ex) {
+            logger.warn("Failed to fetch URL {}: {}", requestUri, ex.getMessage());
+            return new HttpFetchResult.ResultException(ex);
+        }

-        try (var response = call.execute();
-             WarcInputBuffer inputBuffer = WarcInputBuffer.forResponse(response))
+        try (WarcInputBuffer inputBuffer = WarcInputBuffer.forResponse(response);
+             InputStream inputStream = inputBuffer.read())
         {
+            if (cookies.hasCookies()) {
+                extraHeaders.put("X-Has-Cookies", List.of("1"));
+            }
+
             byte[] responseHeaders = WarcProtocolReconstructor.getResponseHeader(response, inputBuffer.size()).getBytes(StandardCharsets.UTF_8);
             ResponseDataBuffer responseDataBuffer = new ResponseDataBuffer(inputBuffer.size() + responseHeaders.length);

-            InputStream inputStream = inputBuffer.read();
-
-            ip = IpInterceptingNetworkInterceptor.getIpFromResponse(response);
-
             responseDataBuffer.put(responseHeaders);
             responseDataBuffer.updateDigest(responseDigestBuilder, 0, responseHeaders.length);
@@ -123,17 +135,15 @@ public class WarcRecorder implements AutoCloseable {
             // It looks like this might be the same as requestUri, but it's not;
             // it's the URI after resolving redirects.
-            final URI responseUri = response.request().url().uri();
+            final URI responseUri = response.uri();

             WarcResponse.Builder responseBuilder = new WarcResponse.Builder(responseUri)
                     .blockDigest(responseDigestBuilder.build())
                     .date(date)
                     .body(MediaType.HTTP_RESPONSE, responseDataBuffer.copyBytes());

-            cookieInformation.paint(responseBuilder);
-
-            if (ip != null) responseBuilder.ipAddress(InetAddress.getByName(ip));
+            InetAddress inetAddress = InetAddress.getByName(responseUri.getHost());
+            responseBuilder.ipAddress(inetAddress);

             responseBuilder.payloadDigest(payloadDigestBuilder.build());
             responseBuilder.truncated(inputBuffer.truncationReason());
@@ -150,8 +160,8 @@ public class WarcRecorder implements AutoCloseable {
             byte[] httpRequestString = WarcProtocolReconstructor
                     .getHttpRequestString(
                             response.request().method(),
-                            response.request().headers().toMultimap(),
-                            request.headers().toMultimap(),
+                            response.request().headers().map(),
+                            extraHeaders,
                             requestUri)
                     .getBytes();
@@ -167,10 +177,29 @@ public class WarcRecorder implements AutoCloseable {
             warcRequest.http(); // force HTTP header to be parsed before body is consumed so that caller can use it
             writer.write(warcRequest);
if (Duration.between(date, Instant.now()).compareTo(Duration.ofSeconds(9)) > 0
&& inputBuffer.size() < 2048
&& !request.uri().getPath().endsWith("robots.txt")) // don't bail on robots.txt
{
// Fast detection and mitigation of crawler traps that respond with slow
// small responses, with a high branching factor
// Note we bail *after* writing the warc records, this will effectively only
// prevent link extraction from the document.
logger.warn("URL {} took too long to fetch ({}s) and was too small for the effort ({}b)",
requestUri,
Duration.between(date, Instant.now()).getSeconds(),
inputBuffer.size()
);
return new HttpFetchResult.ResultException(new IOException("Likely crawler trap"));
}
             return new HttpFetchResult.ResultOk(responseUri,
-                    response.code(),
+                    response.statusCode(),
                     inputBuffer.headers(),
-                    ip,
+                    inetAddress.getHostAddress(),
                     responseDataBuffer.data,
                     dataStart,
                     responseDataBuffer.length() - dataStart);
@@ -185,7 +214,7 @@ public class WarcRecorder implements AutoCloseable {
         writer.write(item);
     }

-    private void saveOldResponse(EdgeUrl url, String contentType, int statusCode, String documentBody, @Nullable String headers, ContentTags contentTags) {
+    private void saveOldResponse(EdgeUrl url, String contentType, int statusCode, byte[] documentBody, @Nullable String headers, ContentTags contentTags) {
         try {
             WarcDigestBuilder responseDigestBuilder = new WarcDigestBuilder();
             WarcDigestBuilder payloadDigestBuilder = new WarcDigestBuilder();
@@ -195,7 +224,7 @@ public class WarcRecorder implements AutoCloseable {
             if (documentBody == null) {
                 bytes = new byte[0];
             } else {
-                bytes = documentBody.getBytes();
+                bytes = documentBody;
             }

             // Create a synthesis of custom headers and the original headers
@@ -246,7 +275,9 @@ public class WarcRecorder implements AutoCloseable {
                     .date(Instant.now())
                     .body(MediaType.HTTP_RESPONSE, responseDataBuffer.copyBytes());

-            cookieInformation.paint(builder);
+            if (cookies.hasCookies()) {
+                builder.addHeader("X-Has-Cookies", "1");
+            }

             var reference = builder.build();
@@ -264,7 +295,7 @@ public class WarcRecorder implements AutoCloseable {
      * an E-Tag or Last-Modified header, and the server responds with a 304 Not Modified. In this
      * scenario we want to record the data as it was in the previous crawl, but not re-fetch it.
      */
-    public void writeReferenceCopy(EdgeUrl url, String contentType, int statusCode, String documentBody, @Nullable String headers, ContentTags ctags) {
+    public void writeReferenceCopy(EdgeUrl url, String contentType, int statusCode, byte[] documentBody, @Nullable String headers, ContentTags ctags) {
         saveOldResponse(url, contentType, statusCode, documentBody, headers, ctags);
     }
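A condensed sketch of the new call shape for recording a fetch with the Cookies-based constructor; the path and URL below are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.nio.file.Path;

import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder;

class WarcFetchSketch {
    static void recordOne(HttpClient client) throws Exception {
        var cookies = new Cookies();
        try (var recorder = new WarcRecorder(Path.of("/tmp/example.warc.gz"), cookies)) {  // placeholder path
            var request = HttpRequest.newBuilder()
                    .GET()
                    .uri(URI.create("https://www.example.com/"))                            // placeholder URL
                    .build();
            var result = recorder.fetch(client, request);  // writes request + response records to the WARC
        }
    }
}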

nu/marginalia/crawl/retreival/CrawlDataReference.java

@@ -4,6 +4,7 @@ import nu.marginalia.ContentTypes;
 import nu.marginalia.io.SerializableCrawlDataStream;
 import nu.marginalia.lsh.EasyLSH;
 import nu.marginalia.model.crawldata.CrawledDocument;
+import org.jetbrains.annotations.NotNull;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -11,54 +12,76 @@ import javax.annotation.Nullable;
 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;
+import java.util.Iterator;
+import java.util.Objects;
+import java.util.Optional;

 /** A reference to a domain that has been crawled before. */
-public class CrawlDataReference implements AutoCloseable {
+public class CrawlDataReference implements AutoCloseable, Iterable<CrawledDocument> {
+
+    private boolean closed = false;
+
+    @Nullable
+    private final Path path;
+
+    @Nullable
+    private SerializableCrawlDataStream data = null;

-    private final SerializableCrawlDataStream data;
     private static final Logger logger = LoggerFactory.getLogger(CrawlDataReference.class);

-    public CrawlDataReference(SerializableCrawlDataStream data) {
-        this.data = data;
+    public CrawlDataReference(@Nullable Path path) {
+        this.path = path;
     }

     public CrawlDataReference() {
-        this(SerializableCrawlDataStream.empty());
+        this(null);
     }

     /** Delete the associated data from disk, if it exists */
     public void delete() throws IOException {
-        Path filePath = data.path();
-
-        if (filePath != null) {
-            Files.deleteIfExists(filePath);
+        if (path != null) {
+            Files.deleteIfExists(path);
         }
     }

-    /** Get the next document from the crawl data,
-     * returning null when there are no more documents
-     * available
-     */
-    @Nullable
-    public CrawledDocument nextDocument() {
-        try {
-            while (data.hasNext()) {
-                if (data.next() instanceof CrawledDocument doc) {
-                    if (!ContentTypes.isAccepted(doc.contentType))
-                        continue;
-
-                    return doc;
+    public @NotNull Iterator<CrawledDocument> iterator() {
+
+        requireStream();
// Guaranteed by requireStream, but helps java
Objects.requireNonNull(data);
return data.map(next -> {
if (next instanceof CrawledDocument doc && ContentTypes.isAccepted(doc.contentType)) {
return Optional.of(doc);
}
else {
return Optional.empty();
}
});
}
/** After calling this method, data is guaranteed to be non-null */
private void requireStream() {
if (closed) {
throw new IllegalStateException("Use after close()");
}
if (data == null) {
try {
if (path != null) {
data = SerializableCrawlDataStream.openDataStream(path);
return;
                 }
             }
-        }
-        catch (IOException ex) {
-            logger.error("Failed to read next document", ex);
-        }
+            catch (Exception ex) {
+                logger.error("Failed to open stream", ex);
+            }

-        return null;
+            data = SerializableCrawlDataStream.empty();
+        }
     }
-    public static boolean isContentBodySame(String one, String other) {
+    public static boolean isContentBodySame(byte[] one, byte[] other) {

         final long contentHashOne = contentHash(one);
         final long contentHashOther = contentHash(other);
@@ -66,7 +89,7 @@ public class CrawlDataReference implements AutoCloseable {
         return EasyLSH.hammingDistance(contentHashOne, contentHashOther) < 4;
     }

-    private static long contentHash(String content) {
+    private static long contentHash(byte[] content) {
         EasyLSH hash = new EasyLSH();
         int next = 0;
@@ -74,8 +97,8 @@ public class CrawlDataReference implements AutoCloseable {
         // In a naive best-effort fashion, extract the text
         // content of the document and feed it into the LSH

-        for (int i = 0; i < content.length(); i++) {
-            char c = content.charAt(i);
+        for (byte b : content) {
+            char c = (char) b;
             if (c == '<') {
                 isInTag = true;
             } else if (c == '>') {
@@ -98,7 +121,12 @@ public class CrawlDataReference implements AutoCloseable {
     }

     @Override
-    public void close() throws Exception {
-        data.close();
+    public void close() throws IOException {
+        if (!closed) {
+            if (data != null) {
+                data.close();
+            }
+            closed = true;
+        }
     }
 }
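A small sketch exercising the byte-oriented similarity check; the sample bodies are made up:

import java.nio.charset.StandardCharsets;

import nu.marginalia.crawl.retreival.CrawlDataReference;

class BodySimilaritySketch {
    // True when the locality-sensitive hashes of the two bodies differ in fewer than 4 bits
    static boolean unchanged(byte[] previousBody, byte[] currentBody) {
        return CrawlDataReference.isContentBodySame(previousBody, currentBody);
    }

    public static void main(String[] args) {
        byte[] a = "<html><body>hello world</body></html>".getBytes(StandardCharsets.UTF_8);
        byte[] b = "<html><body>hello world!</body></html>".getBytes(StandardCharsets.UTF_8);
        System.out.println(unchanged(a, b));
    }
}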

nu/marginalia/crawl/retreival/CrawlerRetreiver.java

@@ -12,7 +12,6 @@ import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
 import nu.marginalia.crawl.logic.LinkFilterSelector;
 import nu.marginalia.crawl.retreival.revisit.CrawlerRevisitor;
 import nu.marginalia.crawl.retreival.revisit.DocumentWithReference;
-import nu.marginalia.crawl.retreival.sitemap.SitemapFetcher;
 import nu.marginalia.ip_blocklist.UrlBlocklist;
 import nu.marginalia.link_parser.LinkParser;
 import nu.marginalia.model.EdgeDomain;
@@ -20,7 +19,6 @@ import nu.marginalia.model.EdgeUrl;
 import nu.marginalia.model.body.DocumentBodyExtractor;
 import nu.marginalia.model.body.HttpFetchResult;
 import nu.marginalia.model.crawldata.CrawlerDomainStatus;
-import org.jsoup.Jsoup;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -53,7 +51,6 @@ public class CrawlerRetreiver implements AutoCloseable {
     private final WarcRecorder warcRecorder;

     private final CrawlerRevisitor crawlerRevisitor;
-    private final SitemapFetcher sitemapFetcher;
     int errorCount = 0;

     public CrawlerRetreiver(HttpFetcher fetcher,
@@ -71,7 +68,6 @@ public class CrawlerRetreiver implements AutoCloseable {
         crawlFrontier = new DomainCrawlFrontier(new EdgeDomain(domain), specs.urls(), specs.crawlDepth());
         crawlerRevisitor = new CrawlerRevisitor(crawlFrontier, this, warcRecorder);
-        sitemapFetcher = new SitemapFetcher(crawlFrontier, fetcher.createSitemapRetriever());

         // We must always crawl the index page first, this is assumed when fingerprinting the server
         var fst = crawlFrontier.peek();
@@ -93,30 +89,45 @@ public class CrawlerRetreiver implements AutoCloseable {
     }

     public int crawlDomain(DomainLinks domainLinks, CrawlDataReference oldCrawlData) {
-        try {
+        try (oldCrawlData) {
             // Do an initial domain probe to determine the root URL
-            EdgeUrl rootUrl;
-
             var probeResult = probeRootUrl();
-            switch (probeResult) {
+
+            return switch (probeResult) {
                 case HttpFetcher.DomainProbeResult.Ok(EdgeUrl probedUrl) -> {
-                    rootUrl = probedUrl;
+                    // Good track
// Sleep after the initial probe, we don't have access to the robots.txt yet
// so we don't know the crawl delay
TimeUnit.SECONDS.sleep(1);
final SimpleRobotRules robotsRules = fetcher.fetchRobotRules(probedUrl.domain, warcRecorder);
final CrawlDelayTimer delayTimer = new CrawlDelayTimer(robotsRules.getCrawlDelay());
delayTimer.waitFetchDelay(0); // initial delay after robots.txt
DomainStateDb.SummaryRecord summaryRecord = sniffRootDocument(probedUrl, delayTimer);
domainStateDb.save(summaryRecord);
// Play back the old crawl data (if present) and fetch the documents comparing etags and last-modified
if (crawlerRevisitor.recrawl(oldCrawlData, robotsRules, delayTimer) > 0) {
// If we have reference data, we will always grow the crawl depth a bit
crawlFrontier.increaseDepth(1.5, 2500);
}
oldCrawlData.close(); // proactively close the crawl data reference here to not hold onto expensive resources
yield crawlDomain(probedUrl, robotsRules, delayTimer, domainLinks);
                 }
                 case HttpFetcher.DomainProbeResult.Redirect(EdgeDomain domain1) -> {
                     domainStateDb.save(DomainStateDb.SummaryRecord.forError(domain, "Redirect", domain1.toString()));
-                    return 1;
+                    yield 1;
                 }
                 case HttpFetcher.DomainProbeResult.Error(CrawlerDomainStatus status, String desc) -> {
                     domainStateDb.save(DomainStateDb.SummaryRecord.forError(domain, status.toString(), desc));
-                    return 1;
+                    yield 1;
                 }
-            }
-
-            // Sleep after the initial probe, we don't have access to the robots.txt yet
-            // so we don't know the crawl delay
-            TimeUnit.SECONDS.sleep(1);
-
-            return crawlDomain(oldCrawlData, rootUrl, domainLinks);
+            };
         }
         catch (Exception ex) {
             logger.error("Error crawling domain {}", domain, ex);
@@ -124,30 +135,19 @@ public class CrawlerRetreiver implements AutoCloseable {
         }
     }

-    private int crawlDomain(CrawlDataReference oldCrawlData,
-                            EdgeUrl rootUrl,
-                            DomainLinks domainLinks) throws InterruptedException {
-
-        final SimpleRobotRules robotsRules = fetcher.fetchRobotRules(rootUrl.domain, warcRecorder);
-        final CrawlDelayTimer delayTimer = new CrawlDelayTimer(robotsRules.getCrawlDelay());
-
-        delayTimer.waitFetchDelay(0); // initial delay after robots.txt
-
-        DomainStateDb.SummaryRecord summaryRecord = sniffRootDocument(rootUrl, delayTimer);
-        domainStateDb.save(summaryRecord);
-
-        // Play back the old crawl data (if present) and fetch the documents comparing etags and last-modified
-        if (crawlerRevisitor.recrawl(oldCrawlData, robotsRules, delayTimer) > 0) {
-            // If we have reference data, we will always grow the crawl depth a bit
-            crawlFrontier.increaseDepth(1.5, 2500);
-        }
+    private int crawlDomain(EdgeUrl rootUrl,
+                            SimpleRobotRules robotsRules,
+                            CrawlDelayTimer delayTimer,
+                            DomainLinks domainLinks) {

         // Add external links to the crawl frontier
         crawlFrontier.addAllToQueue(domainLinks.getUrls(rootUrl.proto));

-        // Add links from the sitemap to the crawl frontier
-        sitemapFetcher.downloadSitemaps(robotsRules, rootUrl);
+        // Fetch sitemaps
+        for (var sitemap : robotsRules.getSitemaps()) {
+            crawlFrontier.addAllToQueue(fetcher.fetchSitemapUrls(sitemap, delayTimer));
+        }

         while (!crawlFrontier.isEmpty()
             && !crawlFrontier.isCrawlDepthReached()
@@ -271,13 +271,19 @@ public class CrawlerRetreiver implements AutoCloseable {
         }

         // Download the sitemap if available
-        if (feedLink.isPresent()) {
-            sitemapFetcher.downloadSitemaps(List.of(feedLink.get()));
-            timer.waitFetchDelay(0);
-        }
+        feedLink.ifPresent(s -> fetcher.fetchSitemapUrls(s, timer));

         // Grab the favicon if it exists
-        fetchWithRetry(faviconUrl, timer, HttpFetcher.ProbeType.DISABLED, ContentTags.empty());
if (fetchWithRetry(faviconUrl, timer, HttpFetcher.ProbeType.DISABLED, ContentTags.empty()) instanceof HttpFetchResult.ResultOk iconResult) {
String contentType = iconResult.header("Content-Type");
byte[] iconData = iconResult.getBodyBytes();
domainStateDb.saveIcon(
domain,
new DomainStateDb.FaviconRecord(contentType, iconData)
);
}
         timer.waitFetchDelay(0);
     }
@@ -382,14 +388,14 @@ public class CrawlerRetreiver implements AutoCloseable {
         else if (fetchedDoc instanceof HttpFetchResult.Result304Raw && reference.doc() != null) {
             var doc = reference.doc();

-            warcRecorder.writeReferenceCopy(top, doc.contentType, doc.httpStatus, doc.documentBody, doc.headers, contentTags);
+            warcRecorder.writeReferenceCopy(top, doc.contentType, doc.httpStatus, doc.documentBodyBytes, doc.headers, contentTags);

             fetchedDoc = new HttpFetchResult.Result304ReplacedWithReference(doc.url,
                     new ContentType(doc.contentType, "UTF-8"),
-                    doc.documentBody);
+                    doc.documentBodyBytes);

-            if (doc.documentBody != null) {
-                var parsed = Jsoup.parse(doc.documentBody);
+            if (doc.documentBodyBytes != null) {
+                var parsed = doc.parseBody();

                 crawlFrontier.enqueueLinksFromDocument(top, parsed);
                 crawlFrontier.addVisited(top);

nu/marginalia/crawl/retreival/revisit/CrawlerRevisitor.java

@@ -1,6 +1,5 @@
 package nu.marginalia.crawl.retreival.revisit;

-import com.google.common.base.Strings;
 import crawlercommons.robots.SimpleRobotRules;
 import nu.marginalia.crawl.fetcher.ContentTags;
 import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
@@ -11,7 +10,8 @@ import nu.marginalia.crawl.retreival.DomainCrawlFrontier;
 import nu.marginalia.model.EdgeUrl;
 import nu.marginalia.model.body.HttpFetchResult;
 import nu.marginalia.model.crawldata.CrawledDocument;
-import org.jsoup.Jsoup;
+
+import java.io.IOException;

 /** This class encapsulates the logic for re-visiting a domain that has already been crawled.
  * We may use information from the previous crawl to inform the next crawl, specifically the
@@ -40,18 +40,12 @@ public class CrawlerRevisitor {
         int errors = 0;
         int skipped = 0;

-        for (;;) {
+        for (CrawledDocument doc : oldCrawlData) {
             if (errors > 20) {
                 // If we've had too many errors, we'll stop trying to recrawl
                 break;
             }

-            CrawledDocument doc = oldCrawlData.nextDocument();
-
-            if (doc == null)
-                break;
-
-            // This Shouldn't Happen (TM)
             var urlMaybe = EdgeUrl.parse(doc.url);
             if (urlMaybe.isEmpty())
                 continue;
@@ -70,7 +64,7 @@ public class CrawlerRevisitor {
// unlikely to produce anything meaningful for us. // unlikely to produce anything meaningful for us.
if (doc.httpStatus != 200) if (doc.httpStatus != 200)
continue; continue;
if (Strings.isNullOrEmpty(doc.documentBody)) if (!doc.hasBody())
continue; continue;
if (!crawlFrontier.filterLink(url)) if (!crawlFrontier.filterLink(url))
@@ -117,14 +111,19 @@ public class CrawlerRevisitor {
// fashion to make sure we eventually catch changes over time // fashion to make sure we eventually catch changes over time
// and ensure we discover new links // and ensure we discover new links
// Hoover up any links from the document try {
crawlFrontier.enqueueLinksFromDocument(url, Jsoup.parse(doc.documentBody)); // Hoover up any links from the document
crawlFrontier.enqueueLinksFromDocument(url, doc.parseBody());
}
catch (IOException ex) {
//
}
// Add a WARC record so we don't repeat this // Add a WARC record so we don't repeat this
warcRecorder.writeReferenceCopy(url, warcRecorder.writeReferenceCopy(url,
doc.contentType, doc.contentType,
doc.httpStatus, doc.httpStatus,
doc.documentBody, doc.documentBodyBytes,
doc.headers, doc.headers,
new ContentTags(doc.etagMaybe, doc.lastModifiedMaybe) new ContentTags(doc.etagMaybe, doc.lastModifiedMaybe)
); );

DocumentWithReference.java

@@ -2,8 +2,6 @@ package nu.marginalia.crawl.retreival.revisit;

 import nu.marginalia.crawl.fetcher.ContentTags;
 import nu.marginalia.crawl.retreival.CrawlDataReference;
-import nu.marginalia.model.body.DocumentBodyExtractor;
-import nu.marginalia.model.body.DocumentBodyResult;
 import nu.marginalia.model.body.HttpFetchResult;
 import nu.marginalia.model.crawldata.CrawledDocument;
@@ -35,21 +33,17 @@ public record DocumentWithReference(
            return false;
        if (doc == null)
            return false;
-       if (doc.documentBody == null)
+       if (doc.documentBodyBytes.length == 0)
            return false;

-       if (!(DocumentBodyExtractor.asString(resultOk) instanceof DocumentBodyResult.Ok<String> bodyOk)) {
-           return false;
-       }
-
-       return CrawlDataReference.isContentBodySame(doc.documentBody, bodyOk.body());
+       return CrawlDataReference.isContentBodySame(doc.documentBodyBytes, resultOk.bytesRaw());
    }

    public ContentTags getContentTags() {
        if (null == doc)
            return ContentTags.empty();

-       if (doc.documentBody == null || doc.httpStatus != 200)
+       if (doc.documentBodyBytes.length == 0 || doc.httpStatus != 200)
            return ContentTags.empty();

        String lastmod = doc.getLastModified();

SitemapFetcher.java (deleted)

@@ -1,72 +0,0 @@
package nu.marginalia.crawl.retreival.sitemap;
import crawlercommons.robots.SimpleRobotRules;
import nu.marginalia.crawl.fetcher.SitemapRetriever;
import nu.marginalia.crawl.retreival.DomainCrawlFrontier;
import nu.marginalia.model.EdgeUrl;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.HashSet;
import java.util.List;
import java.util.Optional;
import java.util.Set;
public class SitemapFetcher {
private final DomainCrawlFrontier crawlFrontier;
private final SitemapRetriever sitemapRetriever;
private static final Logger logger = LoggerFactory.getLogger(SitemapFetcher.class);
public SitemapFetcher(DomainCrawlFrontier crawlFrontier, SitemapRetriever sitemapRetriever) {
this.crawlFrontier = crawlFrontier;
this.sitemapRetriever = sitemapRetriever;
}
public void downloadSitemaps(SimpleRobotRules robotsRules, EdgeUrl rootUrl) {
List<String> urls = robotsRules.getSitemaps();
if (urls.isEmpty()) {
urls = List.of(rootUrl.withPathAndParam("/sitemap.xml", null).toString());
}
downloadSitemaps(urls);
}
public void downloadSitemaps(List<String> urls) {
Set<String> checkedSitemaps = new HashSet<>();
for (var rawUrl : urls) {
Optional<EdgeUrl> parsedUrl = EdgeUrl.parse(rawUrl);
if (parsedUrl.isEmpty()) {
continue;
}
EdgeUrl url = parsedUrl.get();
// Let's not download sitemaps from other domains for now
if (!crawlFrontier.isSameDomain(url)) {
continue;
}
if (checkedSitemaps.contains(url.path))
continue;
var sitemap = sitemapRetriever.fetchSitemap(url);
if (sitemap.isEmpty()) {
continue;
}
// ensure we don't try to download this sitemap again
// (don't move this up, as we may want to check the same
// path with different protocols until we find one that works)
checkedSitemaps.add(url.path);
crawlFrontier.addAllToQueue(sitemap);
}
logger.debug("Queue is now {}", crawlFrontier.queueSize());
}
}

build.gradle

@@ -32,11 +32,11 @@ dependencies {
    implementation libs.bundles.parquet
    implementation libs.trove
+   implementation libs.slop
    implementation libs.jwarc
    implementation libs.gson
    implementation libs.commons.io
    implementation libs.commons.lang3
-   implementation libs.okhttp3
    implementation libs.jsoup
    implementation libs.snakeyaml
    implementation libs.zstd

CrawledDomainReader.java (deleted)

@@ -1,45 +0,0 @@
package nu.marginalia.io;
import nu.marginalia.io.crawldata.format.ParquetSerializableCrawlDataStream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
public class CrawledDomainReader {
private static final Logger logger = LoggerFactory.getLogger(CrawledDomainReader.class);
/** An iterator-like access to domain data This must be closed otherwise it will leak off-heap memory! */
public static SerializableCrawlDataStream createDataStream(Path fullPath) throws IOException
{
String fileName = fullPath.getFileName().toString();
if (fileName.endsWith(".parquet")) {
try {
return new ParquetSerializableCrawlDataStream(fullPath);
} catch (Exception ex) {
logger.error("Error reading domain data from " + fullPath, ex);
return SerializableCrawlDataStream.empty();
}
} else {
logger.error("Unknown file type: {}", fullPath);
return SerializableCrawlDataStream.empty();
}
}
/** An iterator-like access to domain data. This must be closed otherwise it will leak off-heap memory! */
public static SerializableCrawlDataStream createDataStream(Path basePath, String domain, String id) throws IOException {
Path parquetPath = CrawlerOutputFile.getParquetPath(basePath, id, domain);
if (Files.exists(parquetPath)) {
return createDataStream(parquetPath);
}
else {
throw new FileNotFoundException("No such file: " + parquetPath);
}
}
}

CrawlerOutputFile.java

@@ -35,7 +35,7 @@ public class CrawlerOutputFile {
        return destDir.resolve(id + "-" + filesystemSafeName(domain) + "-" + version.suffix + ".warc.gz");
    }

-   public static Path createParquetPath(Path basePath, String id, String domain) throws IOException {
+   public static Path createSlopPath(Path basePath, String id, String domain) throws IOException {
        id = padId(id);
        String first = id.substring(0, 2);
@@ -45,8 +45,9 @@ public class CrawlerOutputFile {
        if (!Files.exists(destDir)) {
            Files.createDirectories(destDir);
        }
-       return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".parquet");
+       return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".slop.zip");
    }

    public static Path getParquetPath(Path basePath, String id, String domain) {
        id = padId(id);
@@ -56,16 +57,18 @@ public class CrawlerOutputFile {
        Path destDir = basePath.resolve(first).resolve(second);
        return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".parquet");
    }

-   public static Path getWarcPath(Path basePath, String id, String domain, WarcFileVersion version) {
+   public static Path getSlopPath(Path basePath, String id, String domain) {
        id = padId(id);
        String first = id.substring(0, 2);
        String second = id.substring(2, 4);

        Path destDir = basePath.resolve(first).resolve(second);
-       return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".warc" + version.suffix);
+       return destDir.resolve(id + "-" + filesystemSafeName(domain) + ".slop.zip");
    }

    /**
     * Pads the given ID with leading zeros to ensure it has a length of 4 characters.
     */
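Not part of the diff: a minimal sketch of where the new slop files land, assuming CrawlerOutputFile lives in nu.marginalia.io (the deleted CrawledDomainReader above references it from that package). The base path and id are illustrative, and the exact file name depends on filesystemSafeName(), which is not shown in this hunk.

import nu.marginalia.io.CrawlerOutputFile;

import java.io.IOException;
import java.nio.file.Path;

class SlopPathExample {
    public static void main(String[] args) throws IOException {
        Path base = Path.of("/data/crawl");  // illustrative storage root
        // padId("17") -> "0017", so the directory becomes <base>/00/17/
        Path slop = CrawlerOutputFile.createSlopPath(base, "17", "www.example.com");
        System.out.println(slop);  // .../00/17/0017-<filesystemSafeName("www.example.com")>.slop.zip
    }
}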

SerializableCrawlDataStream.java

@@ -1,35 +1,120 @@
 package nu.marginalia.io;

+import nu.marginalia.io.crawldata.format.ParquetSerializableCrawlDataStream;
+import nu.marginalia.io.crawldata.format.SlopSerializableCrawlDataStream;
 import nu.marginalia.model.crawldata.CrawledDocument;
 import nu.marginalia.model.crawldata.CrawledDomain;
 import nu.marginalia.model.crawldata.SerializableCrawlData;
 import org.jetbrains.annotations.Nullable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;

 import java.io.IOException;
 import java.nio.file.Path;
 import java.util.ArrayList;
 import java.util.Iterator;
 import java.util.List;
+import java.util.Optional;
+import java.util.function.Function;

 /** Closable iterator exceptional over serialized crawl data
  * The data may appear in any order, and the iterator must be closed.
  *
- * @see CrawledDomainReader
  * */
 public interface SerializableCrawlDataStream extends AutoCloseable {
+    Logger logger = LoggerFactory.getLogger(SerializableCrawlDataStream.class);

     SerializableCrawlData next() throws IOException;

     /** Return a size hint for the stream.  0 is returned if the hint is not available,
      * or if the file is seemed too small to bother */
-    default int sizeHint() { return 0; }
+    default int getSizeHint() { return 0; }

     boolean hasNext() throws IOException;

     @Nullable
     default Path path() { return null; }

+    void close() throws IOException;
+
+    /** An iterator-like access to domain data  This must be closed otherwise it will leak off-heap memory! */
+    static SerializableCrawlDataStream openDataStream(Path fullPath) throws IOException
+    {
+        String fileName = fullPath.getFileName().toString();
+
+        if (fileName.endsWith(".parquet")) {
+            try {
+                return new ParquetSerializableCrawlDataStream(fullPath);
+            } catch (Exception ex) {
+                logger.error("Error reading domain data from " + fullPath, ex);
+                return SerializableCrawlDataStream.empty();
+            }
+        }
+        if (fileName.endsWith(".slop.zip")) {
+            try {
+                return new SlopSerializableCrawlDataStream(fullPath);
+            } catch (Exception ex) {
+                logger.error("Error reading domain data from " + fullPath, ex);
+                return SerializableCrawlDataStream.empty();
+            }
+        }
+
+        logger.error("Unknown file type: {}", fullPath);
+        return SerializableCrawlDataStream.empty();
+    }
+
+    /** Get an idication of the size of the stream.  This is used to determine whether to
+     * load the stream into memory or not.  0 is returned if the hint is not available,
+     * or if the file is seemed too small to bother */
+    static int getSizeHint(Path fullPath) {
+        String fileName = fullPath.getFileName().toString();
+        if (fileName.endsWith(".parquet")) {
+            return ParquetSerializableCrawlDataStream.sizeHint(fullPath);
+        }
+        else if (fileName.endsWith(".slop.zip")) {
+            return SlopSerializableCrawlDataStream.sizeHint(fullPath);
+        }
+        else {
+            return 0;
+        }
+    }
+
+    default <T> Iterator<T> map(Function<SerializableCrawlData, Optional<T>> mapper) {
+        return new Iterator<>() {
+            T next = null;
+
+            public boolean hasNext() {
+                if (next != null)
+                    return true;
+                try {
+                    while (SerializableCrawlDataStream.this.hasNext()) {
+                        var val = mapper.apply(SerializableCrawlDataStream.this.next());
+                        if (val.isPresent()) {
+                            next = val.get();
+                            return true;
+                        }
+                    }
+                }
+                catch (IOException ex) {
+                    logger.error("Error during stream", ex);
+                }
+
+                return false;
+            }
+
+            public T next() {
+                if (next == null && !hasNext())
+                    throw new IllegalStateException("No more data to read");
+
+                T ret = next;
+                next = null;
+                return ret;
+            }
+        };
+    }
+
     /** For tests */
     default List<SerializableCrawlData> asList() throws IOException {
         List<SerializableCrawlData> data = new ArrayList<>();
@@ -81,7 +166,6 @@ public interface SerializableCrawlDataStream extends AutoCloseable {
             public boolean hasNext() { return iterator.hasNext(); }
             public void close() {}
         };
     }

 }
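Not part of the diff: a minimal sketch of reading crawl data back through the new factory method, assuming a slop file already exists at the (illustrative) path below.

import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.model.crawldata.CrawledDocument;

import java.nio.file.Path;

class ReadCrawlDataExample {
    public static void main(String[] args) throws Exception {
        Path file = Path.of("/data/crawl/00/17/0017-www.example.com.slop.zip");  // illustrative
        try (SerializableCrawlDataStream stream = SerializableCrawlDataStream.openDataStream(file)) {
            while (stream.hasNext()) {
                // CrawledDomain and CrawledDocument records may arrive in any order
                if (stream.next() instanceof CrawledDocument doc && doc.httpStatus == 200) {
                    System.out.println(doc.url + " (" + doc.contentType + ")");
                }
            }
        }
    }
}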

ParquetSerializableCrawlDataStream.java

@@ -1,7 +1,6 @@
 package nu.marginalia.io.crawldata.format;

 import nu.marginalia.contenttype.ContentType;
-import nu.marginalia.contenttype.DocumentBodyToString;
 import nu.marginalia.hash.MurmurHash3_128;
 import nu.marginalia.io.SerializableCrawlDataStream;
 import nu.marginalia.model.EdgeUrl;
@@ -18,6 +17,7 @@ import java.nio.file.Path;
 import java.util.*;
 import java.util.stream.Stream;

+@Deprecated
 public class ParquetSerializableCrawlDataStream implements AutoCloseable, SerializableCrawlDataStream {
    private static final Logger logger = LoggerFactory.getLogger(ParquetSerializableCrawlDataStream.class);
@@ -40,7 +40,7 @@ public class ParquetSerializableCrawlDataStream implements AutoCloseable, Serial
        return path;
    }

-   public int sizeHint() {
+   public static int sizeHint(Path path) {
        // Only calculate size hint for large files
        // (the reason we calculate them in the first place is to assess whether it is large
        // because it has many documents, or because it is a small number of large documents)
@@ -124,9 +124,7 @@ public class ParquetSerializableCrawlDataStream implements AutoCloseable, Serial
        }
        else if (nextRecord.body != null) {
            try {
-               bodyString = DocumentBodyToString.getStringData(
-                       ContentType.parse(nextRecord.contentType),
-                       nextRecord.body);
+               ContentType.parse(nextRecord.contentType);
            } catch (Exception ex) {
                logger.error("Failed to convert body to string", ex);
                status = CrawlerDocumentStatus.BAD_CHARSET;
@@ -147,7 +145,7 @@ public class ParquetSerializableCrawlDataStream implements AutoCloseable, Serial
                status.toString(),
                "",
                nextRecord.headers,
-               bodyString,
+               nextRecord.body,
                // this field isn't actually used, maybe we can skip calculating it?
                nextRecord.cookies,
                lastModified,

SlopSerializableCrawlDataStream.java (new)

@@ -0,0 +1,181 @@
package nu.marginalia.io.crawldata.format;
import nu.marginalia.contenttype.ContentType;
import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.crawldata.*;
import nu.marginalia.slop.SlopCrawlDataRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.URISyntaxException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.NoSuchElementException;
public class SlopSerializableCrawlDataStream implements AutoCloseable, SerializableCrawlDataStream {
private static final Logger logger = LoggerFactory.getLogger(SlopSerializableCrawlDataStream.class);
private final SlopCrawlDataRecord.FilteringReader reader;
// Holds the next value. This is not a buffer, but to deal with the fact that
// we sometimes generate multiple SerializableCrawlData records for a single input
private final Deque<SerializableCrawlData> nextQ = new ArrayDeque<>();
private boolean wroteDomainRecord = false;
private final Path path;
public SlopSerializableCrawlDataStream(Path file) throws IOException {
path = file;
reader = new SlopCrawlDataRecord.FilteringReader(file) {
@Override
public boolean filter(String url, int status, String contentType) {
String ctLc = contentType.toLowerCase();
if (ctLc.startsWith("text/"))
return true;
else if (ctLc.startsWith("x-marginalia/"))
return true;
return false;
}
};
}
@Override
public Path path() {
return path;
}
public static int sizeHint(Path path) {
// Only calculate size hint for large files
// (the reason we calculate them in the first place is to assess whether it is large
// because it has many documents, or because it is a small number of large documents)
try {
if (Files.size(path) > 10_000_000) {
return SlopCrawlDataRecord.countGoodStatusCodes(path);
}
} catch (IOException e) {
// suppressed
}
return 0;
}
@Override
public boolean hasNext() {
try {
while (reader.hasRemaining() && nextQ.isEmpty()) {
try {
var nextRecord = reader.get();
if (!wroteDomainRecord) {
createDomainRecord(nextRecord);
wroteDomainRecord = true;
}
createDocumentRecord(nextRecord);
} catch (Exception ex) {
logger.error("Failed to create document record", ex);
}
}
return !nextQ.isEmpty();
}
catch (IOException ex) {
return false;
}
}
private void createDomainRecord(SlopCrawlDataRecord parquetRecord) throws URISyntaxException {
CrawlerDomainStatus status = CrawlerDomainStatus.OK;
String statusReason = "";
String redirectDomain = null;
// The advisory content types are used to signal various states of the crawl
// that are not actual crawled documents.
switch (parquetRecord.contentType()) {
case "x-marginalia/advisory;state=redirect" -> {
EdgeUrl crawledUrl = new EdgeUrl(parquetRecord.url());
redirectDomain = crawledUrl.getDomain().toString();
status = CrawlerDomainStatus.REDIRECT;
}
case "x-marginalia/advisory;state=blocked" -> {
status = CrawlerDomainStatus.BLOCKED;
}
case "x-marginalia/advisory;state=error" -> {
status = CrawlerDomainStatus.ERROR;
statusReason = new String(parquetRecord.body());
}
}
nextQ.add(new CrawledDomain(
parquetRecord.domain(),
redirectDomain,
status.toString(),
statusReason,
parquetRecord.ip(),
new ArrayList<>(),
new ArrayList<>()
));
}
private void createDocumentRecord(SlopCrawlDataRecord nextRecord) {
CrawlerDocumentStatus status = CrawlerDocumentStatus.OK;
if (nextRecord.contentType().startsWith("x-marginalia/advisory;state=content-type-failed-probe")) {
status = CrawlerDocumentStatus.BAD_CONTENT_TYPE;
}
else if (nextRecord.contentType().startsWith("x-marginalia/advisory;state=robots-txt-skipped")) {
status = CrawlerDocumentStatus.ROBOTS_TXT;
}
else if (nextRecord.contentType().startsWith("x-marginalia/advisory")) {
// we don't care about the other advisory content types here
return;
}
else if (nextRecord.body() != null) {
try {
ContentType.parse(nextRecord.contentType());
} catch (Exception ex) {
logger.error("Failed to convert body to string", ex);
status = CrawlerDocumentStatus.BAD_CHARSET;
}
}
else {
status = CrawlerDocumentStatus.ERROR;
}
nextQ.add(new CrawledDocument("",
nextRecord.url(),
nextRecord.contentType(),
Instant.ofEpochMilli(nextRecord.timestamp()).toString(),
nextRecord.httpStatus(),
status.toString(),
"",
nextRecord.headers(),
nextRecord.body(),
// this field isn't actually used, maybe we can skip calculating it?
nextRecord.cookies(),
null,
null));
}
public void close() throws IOException {
reader.close();
}
@Override
public SerializableCrawlData next() throws IOException {
if (!hasNext())
throw new NoSuchElementException();
return nextQ.poll();
}
}

DocumentBodyExtractor.java

@@ -18,7 +18,7 @@ public class DocumentBodyExtractor {
            return asBytes(fetchOk);
        }
        else if (result instanceof HttpFetchResult.Result304ReplacedWithReference retained) {
-           return new DocumentBodyResult.Ok<>(retained.contentType(), retained.body().getBytes());
+           return new DocumentBodyResult.Ok<>(retained.contentType(), retained.body());
        }

        return new DocumentBodyResult.Error<>(CrawlerDocumentStatus.ERROR, "Fetch Result Not Ok");

HttpFetchResult.java

@@ -1,17 +1,18 @@
 package nu.marginalia.model.body;

 import nu.marginalia.contenttype.ContentType;
-import okhttp3.Headers;
+import org.jetbrains.annotations.Nullable;
 import org.jsoup.Jsoup;
 import org.jsoup.nodes.Document;
 import org.netpreserve.jwarc.MessageHeaders;
 import org.netpreserve.jwarc.WarcResponse;

 import java.io.ByteArrayInputStream;
-import java.io.IOException;
 import java.io.InputStream;
 import java.net.InetAddress;
 import java.net.URI;
+import java.net.http.HttpHeaders;
+import java.util.Arrays;
 import java.util.Optional;

 /* FIXME: This interface has a very unfortunate name that is not very descriptive.
@@ -56,42 +57,32 @@ public sealed interface HttpFetchResult {
     */
    record ResultOk(URI uri,
                    int statusCode,
-                   Headers headers,
+                   HttpHeaders headers,
                    String ipAddress,
-                   byte[] bytesRaw,
+                   byte[] bytesRaw, // raw data for the entire response including headers
                    int bytesStart,
                    int bytesLength
    ) implements HttpFetchResult {

+       public ResultOk(URI uri, int status, MessageHeaders headers, String ipAddress, byte[] bytes, int bytesStart, int length) {
+           this(uri, status, HttpHeaders.of(headers.map(), (k,v) -> true), ipAddress, bytes, bytesStart, length);
+       }
+
        public boolean isOk() {
            return statusCode >= 200 && statusCode < 300;
        }

-       public ResultOk(URI uri,
-                       int statusCode,
-                       MessageHeaders headers,
-                       String ipAddress,
-                       byte[] bytesRaw,
-                       int bytesStart,
-                       int bytesLength) {
-           this(uri, statusCode, convertHeaders(headers), ipAddress, bytesRaw, bytesStart, bytesLength);
-       }
-
-       private static Headers convertHeaders(MessageHeaders headers) {
-           var ret = new Headers.Builder();
-           for (var header : headers.map().entrySet()) {
-               for (var value : header.getValue()) {
-                   ret.add(header.getKey(), value);
-               }
-           }
-           return ret.build();
-       }
-
        public InputStream getInputStream() {
            return new ByteArrayInputStream(bytesRaw, bytesStart, bytesLength);
        }

-       public Optional<Document> parseDocument() throws IOException {
+       /** Copy the byte range corresponding to the payload of the response,
+           Warning: Copies the data, use getInputStream() for zero copy access */
+       public byte[] getBodyBytes() {
+           return Arrays.copyOfRange(bytesRaw, bytesStart, bytesStart + bytesLength);
+       }
+
+       public Optional<Document> parseDocument() {
            return DocumentBodyExtractor.asString(this).flatMapOpt((contentType, body) -> {
                if (contentType.is("text/html")) {
                    return Optional.of(Jsoup.parse(body));
@@ -102,8 +93,9 @@ public sealed interface HttpFetchResult {
            });
        }

+       @Nullable
        public String header(String name) {
-           return headers.get(name);
+           return headers.firstValue(name).orElse(null);
        }

    }
@@ -114,20 +106,10 @@ public sealed interface HttpFetchResult {
     *
     * @see Result304Raw for the case where the document has not yet been replaced with the reference data.
     */
-   record Result304ReplacedWithReference(String url, ContentType contentType, String body) implements HttpFetchResult {
+   record Result304ReplacedWithReference(String url, ContentType contentType, byte[] body) implements HttpFetchResult {

        public boolean isOk() {
            return true;
        }

-       public Optional<Document> parseDocument() {
-           try {
-               return Optional.of(Jsoup.parse(body));
-           }
-           catch (Exception ex) {
-               return Optional.empty();
-           }
-       }
    }

    /** Fetching resulted in an exception */
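Not part of the diff: a rough sketch of how downstream code might handle the reshaped result types; the handle() method is made up for illustration.

import nu.marginalia.model.body.HttpFetchResult;

class FetchResultExample {
    static void handle(HttpFetchResult result) {
        if (result instanceof HttpFetchResult.ResultOk ok && ok.isOk()) {
            String contentType = ok.header("Content-Type");  // may be null
            byte[] payload = ok.getBodyBytes();              // copies bytesRaw[bytesStart, bytesStart + bytesLength)
            System.out.println(ok.uri() + " " + contentType + " " + payload.length + " bytes");
        }
        else if (result instanceof HttpFetchResult.Result304ReplacedWithReference replaced) {
            byte[] referenceBody = replaced.body();          // now raw bytes rather than a String
            System.out.println(replaced.url() + " reused " + referenceBody.length + " reference bytes");
        }
    }
}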

CrawledDocument.java

@@ -1,8 +1,16 @@
 package nu.marginalia.model.crawldata;

+import nu.marginalia.contenttype.ContentType;
+import nu.marginalia.contenttype.DocumentBodyToString;
 import nu.marginalia.model.EdgeUrl;
 import org.apache.commons.lang3.StringUtils;
 import org.jetbrains.annotations.Nullable;
+import org.jsoup.nodes.Document;
+
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.Arrays;
+import java.util.Objects;

 public final class CrawledDocument implements SerializableCrawlData {
    public String crawlId;
@@ -19,8 +27,52 @@ public final class CrawledDocument implements SerializableCrawlData {
    @Nullable
    public String headers;

-   public String documentBody;
+   public String documentBody() {
+       return DocumentBodyToString.getStringData(
+               ContentType.parse(contentType),
+               documentBodyBytes);
+   }
+
+   /** Attempt to parse the first sampleSize bytes of the document body into a string */
+   public String documentBody(int sampleSize) {
+       if (sampleSize >= documentBodyBytes.length) {
+           return documentBody();
+       }
+
+       // Truncating the string at an unlucky point *may* lead to a parsing error
+       // ... so we try again with a longer length
+       for (int i = 0; i <= 3 && sampleSize + i < documentBodyBytes.length; i++) {
+           try {
+               byte[] bytes = new byte[sampleSize + i];
+               System.arraycopy(documentBodyBytes, 0, bytes, 0, bytes.length);
+
+               return DocumentBodyToString.getStringData(
+                       ContentType.parse(contentType),
+                       bytes);
+           }
+           catch (RuntimeException ex) {
+               // Try again with i + 1
+           }
+       }
+
+       throw new IllegalArgumentException("Failed to parse substring");
+   }
+
+   public Document parseBody() throws IOException {
+       // Prevent stalls from parsing excessively large documents
+       return DocumentBodyToString.getParsedData(
+               ContentType.parse(contentType),
+               documentBodyBytes,
+               200_000,
+               url);
+   }
+
+   public boolean hasBody() {
+       return documentBodyBytes.length > 0;
+   }
+
+   public byte[] documentBodyBytes;

    /**
     * This is not guaranteed to be set in all versions of the format,
     * information may come in CrawledDomain instead
@@ -30,7 +82,7 @@ public final class CrawledDocument implements SerializableCrawlData {
    public String lastModifiedMaybe;
    public String etagMaybe;

-   public CrawledDocument(String crawlId, String url, String contentType, String timestamp, int httpStatus, String crawlerStatus, String crawlerStatusDesc, @Nullable String headers, String documentBody, Boolean hasCookies, String lastModifiedMaybe, String etagMaybe) {
+   public CrawledDocument(String crawlId, String url, String contentType, String timestamp, int httpStatus, String crawlerStatus, String crawlerStatusDesc, @Nullable String headers, byte[] documentBodyBytes, Boolean hasCookies, String lastModifiedMaybe, String etagMaybe) {
        this.crawlId = crawlId;
        this.url = url;
        this.contentType = contentType;
@@ -39,7 +91,7 @@ public final class CrawledDocument implements SerializableCrawlData {
        this.crawlerStatus = crawlerStatus;
        this.crawlerStatusDesc = crawlerStatusDesc;
        this.headers = headers;
-       this.documentBody = documentBody;
+       this.documentBodyBytes = Objects.requireNonNullElse(documentBodyBytes, new byte[] {});
        this.hasCookies = hasCookies;
        this.lastModifiedMaybe = lastModifiedMaybe;
        this.etagMaybe = etagMaybe;
@@ -106,7 +158,7 @@ public final class CrawledDocument implements SerializableCrawlData {
    }

    public String toString() {
-       return "CrawledDocument(crawlId=" + this.crawlId + ", url=" + this.url + ", contentType=" + this.contentType + ", timestamp=" + this.timestamp + ", httpStatus=" + this.httpStatus + ", crawlerStatus=" + this.crawlerStatus + ", crawlerStatusDesc=" + this.crawlerStatusDesc + ", headers=" + this.headers + ", documentBody=" + this.documentBody + ", hasCookies=" + this.hasCookies + ", lastModifiedMaybe=" + this.lastModifiedMaybe + ", etagMaybe=" + this.etagMaybe + ")";
+       return "CrawledDocument(crawlId=" + this.crawlId + ", url=" + this.url + ", contentType=" + this.contentType + ", timestamp=" + this.timestamp + ", httpStatus=" + this.httpStatus + ", crawlerStatus=" + this.crawlerStatus + ", crawlerStatusDesc=" + this.crawlerStatusDesc + ", headers=" + this.headers + ", documentBody=" + documentBody() + ", hasCookies=" + this.hasCookies + ", lastModifiedMaybe=" + this.lastModifiedMaybe + ", etagMaybe=" + this.etagMaybe + ")";
    }

    public static class CrawledDocumentBuilder {
@@ -118,7 +170,7 @@ public final class CrawledDocument implements SerializableCrawlData {
        private String crawlerStatus;
        private String crawlerStatusDesc;
        private @Nullable String headers;
-       private String documentBody;
+       private byte[] documentBodyBytes = new byte[0];
        private String recrawlState;
        private Boolean hasCookies;
        private String lastModifiedMaybe;
@@ -168,10 +220,13 @@ public final class CrawledDocument implements SerializableCrawlData {
        }

        public CrawledDocumentBuilder documentBody(String documentBody) {
-           this.documentBody = documentBody;
+           this.documentBodyBytes = documentBody.getBytes(StandardCharsets.UTF_8);
+           return this;
+       }
+       public CrawledDocumentBuilder documentBodyBytes(byte[] documentBodyBytes) {
+           this.documentBodyBytes = documentBodyBytes;
            return this;
        }
        @Deprecated
        public CrawledDocumentBuilder recrawlState(String recrawlState) {
            this.recrawlState = recrawlState;
@@ -194,11 +249,11 @@ public final class CrawledDocument implements SerializableCrawlData {
        }

        public CrawledDocument build() {
-           return new CrawledDocument(this.crawlId, this.url, this.contentType, this.timestamp, this.httpStatus, this.crawlerStatus, this.crawlerStatusDesc, this.headers, this.documentBody, this.hasCookies, this.lastModifiedMaybe, this.etagMaybe);
+           return new CrawledDocument(this.crawlId, this.url, this.contentType, this.timestamp, this.httpStatus, this.crawlerStatus, this.crawlerStatusDesc, this.headers, this.documentBodyBytes, this.hasCookies, this.lastModifiedMaybe, this.etagMaybe);
        }

        public String toString() {
-           return "CrawledDocument.CrawledDocumentBuilder(crawlId=" + this.crawlId + ", url=" + this.url + ", contentType=" + this.contentType + ", timestamp=" + this.timestamp + ", httpStatus=" + this.httpStatus + ", crawlerStatus=" + this.crawlerStatus + ", crawlerStatusDesc=" + this.crawlerStatusDesc + ", headers=" + this.headers + ", documentBody=" + this.documentBody + ", recrawlState=" + this.recrawlState + ", hasCookies=" + this.hasCookies + ", lastModifiedMaybe=" + this.lastModifiedMaybe + ", etagMaybe=" + this.etagMaybe + ")";
+           return "CrawledDocument.CrawledDocumentBuilder(crawlId=" + this.crawlId + ", url=" + this.url + ", contentType=" + this.contentType + ", timestamp=" + this.timestamp + ", httpStatus=" + this.httpStatus + ", crawlerStatus=" + this.crawlerStatus + ", crawlerStatusDesc=" + this.crawlerStatusDesc + ", headers=" + this.headers + ", documentBodyBytes=" + Arrays.toString(this.documentBodyBytes) + ", recrawlState=" + this.recrawlState + ", hasCookies=" + this.hasCookies + ", lastModifiedMaybe=" + this.lastModifiedMaybe + ", etagMaybe=" + this.etagMaybe + ")";
        }
    }
 }
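Not part of the diff: a small sketch of how the new body accessors fit together; the document is assumed to come from one of the crawl data streams, and the sample size is illustrative.

import nu.marginalia.model.crawldata.CrawledDocument;
import org.jsoup.nodes.Document;

import java.io.IOException;

class BodyAccessExample {
    static void inspect(CrawledDocument doc) throws IOException {
        if (!doc.hasBody())
            return;

        String full = doc.documentBody();          // decode every byte using the declared content type
        String sample = doc.documentBody(65_536);  // decode roughly the first 64 kB, nudging the cut point if decoding fails
        Document parsed = doc.parseBody();         // jsoup parse, truncated to 200_000 bytes to avoid stalls

        System.out.println(parsed.title() + ": " + full.length() + " chars (sampled " + sample.length() + ")");
    }
}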

CrawledDocumentParquetRecordFileWriter.java

@@ -165,27 +165,28 @@ public class CrawledDocumentParquetRecordFileWriter implements AutoCloseable {
            contentType = "";
        }

-       String headersStr = null;
        StringJoiner headersStrBuilder = new StringJoiner("\n");
-       for (var header : headers) {
-           headersStrBuilder.add(header.getFirst() + ": " + header.getSecond());
+       for (var header : headers.map().entrySet()) {
+           for (var value : header.getValue()) {
+               headersStrBuilder.add(header.getKey() + ": " + value);
+           }
        }
-       headersStr = headersStrBuilder.toString();
+       String headersStr = headersStrBuilder.toString();

        write(new CrawledDocumentParquetRecord(
                domain,
                response.target(),
                fetchOk.ipAddress(),
-               WarcXCookieInformationHeader.hasCookies(response),
+               headers.firstValue("X-Has-Cookies").orElse("0").equals("1"),
                fetchOk.statusCode(),
                response.date(),
                contentType,
                bodyBytes,
                headersStr,
-               headers.get("ETag"),
-               headers.get("Last-Modified"))
-       );
+               headers.firstValue("ETag").orElse(null),
+               headers.firstValue("Last-Modified").orElse(null)
+       ));
    }

SlopCrawlDataRecord.java (new)

@@ -0,0 +1,522 @@
package nu.marginalia.slop;
import nu.marginalia.ContentTypes;
import nu.marginalia.UserAgent;
import nu.marginalia.model.body.DocumentBodyExtractor;
import nu.marginalia.model.body.DocumentBodyResult;
import nu.marginalia.model.body.HttpFetchResult;
import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecord;
import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileReader;
import nu.marginalia.slop.column.array.ByteArrayColumn;
import nu.marginalia.slop.column.primitive.ByteColumn;
import nu.marginalia.slop.column.primitive.LongColumn;
import nu.marginalia.slop.column.primitive.ShortColumn;
import nu.marginalia.slop.column.string.EnumColumn;
import nu.marginalia.slop.column.string.StringColumn;
import nu.marginalia.slop.desc.StorageType;
import nu.marginalia.slop.storage.LargeItem;
import org.apache.commons.io.FileUtils;
import org.apache.commons.lang3.StringUtils;
import org.netpreserve.jwarc.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.util.List;
import java.util.Objects;
import java.util.StringJoiner;
public record SlopCrawlDataRecord(String domain,
String url,
String ip,
boolean cookies,
int httpStatus,
long timestamp,
String contentType,
byte[] body,
String headers)
{
private static final EnumColumn domainColumn = new EnumColumn("domain", StandardCharsets.UTF_8, StorageType.ZSTD);
private static final StringColumn urlColumn = new StringColumn("url", StandardCharsets.UTF_8, StorageType.ZSTD);
private static final StringColumn ipColumn = new StringColumn("ip", StandardCharsets.ISO_8859_1, StorageType.ZSTD);
private static final ByteColumn cookiesColumn = new ByteColumn("cookies");
private static final ShortColumn statusColumn = new ShortColumn("httpStatus");
private static final LongColumn timestampColumn = new LongColumn("timestamp");
private static final EnumColumn contentTypeColumn = new EnumColumn("contentType", StandardCharsets.UTF_8);
private static final ByteArrayColumn bodyColumn = new ByteArrayColumn("body", StorageType.ZSTD);
private static final StringColumn headerColumn = new StringColumn("header", StandardCharsets.UTF_8, StorageType.ZSTD);
public SlopCrawlDataRecord(CrawledDocumentParquetRecord parquetRecord) {
this(parquetRecord.domain,
parquetRecord.url,
parquetRecord.ip,
parquetRecord.cookies,
parquetRecord.httpStatus,
parquetRecord.timestamp.toEpochMilli(),
parquetRecord.contentType,
parquetRecord.body,
parquetRecord.headers
);
}
private static SlopCrawlDataRecord forDomainRedirect(String domain, Instant date, String redirectDomain) {
return new SlopCrawlDataRecord(domain,
"https://" + redirectDomain + "/",
"",
false,
0,
date.toEpochMilli(),
"x-marginalia/advisory;state=redirect",
new byte[0],
""
);
}
private static SlopCrawlDataRecord forDomainError(String domain, Instant date, String ip, String errorStatus) {
return new SlopCrawlDataRecord(domain,
"https://" + domain + "/",
ip,
false,
0,
date.toEpochMilli(),
"x-marginalia/advisory;state=error",
errorStatus.getBytes(),
""
);
}
private static SlopCrawlDataRecord forDocError(String domain, Instant date, String url, String errorStatus) {
return new SlopCrawlDataRecord(domain,
url,
"",
false,
0,
date.toEpochMilli(),
errorStatus,
new byte[0],
""
);
}
public static void convertFromParquet(Path parquetInput, Path slopOutput) throws IOException {
Path tempDir = Files.createTempDirectory(slopOutput.getParent(), "conversion");
try (var writer = new Writer(tempDir);
var stream = CrawledDocumentParquetRecordFileReader.stream(parquetInput))
{
stream.forEach(
parquetRecord -> {
try {
writer.write(new SlopCrawlDataRecord(parquetRecord));
} catch (IOException e) {
throw new RuntimeException(e);
}
});
}
catch (IOException ex) {
FileUtils.deleteDirectory(tempDir.toFile());
throw ex;
}
try {
SlopTablePacker.packToSlopZip(tempDir, slopOutput);
FileUtils.deleteDirectory(tempDir.toFile());
}
catch (Exception ex) {
logger.error("Failed to convert WARC file to Parquet", ex);
}
}
private static final Logger logger = LoggerFactory.getLogger(SlopCrawlDataRecord.class);
public static void convertWarc(String domain,
UserAgent userAgent,
Path warcInputFile,
Path slopOutputFile) throws IOException {
Path tempDir = Files.createTempDirectory(slopOutputFile.getParent(), "slop-"+domain);
try (var warcReader = new WarcReader(warcInputFile);
var slopWriter = new SlopCrawlDataRecord.Writer(tempDir)
) {
WarcXResponseReference.register(warcReader);
WarcXEntityRefused.register(warcReader);
String uaString = userAgent.uaString();
for (var record : warcReader) {
try {
if (record instanceof WarcResponse response) {
// this also captures WarcXResponseReference, which inherits from WarcResponse
// and is used to store old responses from previous crawls; in this part of the logic
// we treat them the same as a normal response
if (!filterResponse(uaString, response)) {
continue;
}
slopWriter.write(domain, response);
} else if (record instanceof WarcXEntityRefused refused) {
slopWriter.write(domain, refused);
} else if (record instanceof Warcinfo warcinfo) {
slopWriter.write(warcinfo);
}
}
catch (Exception ex) {
logger.error("Failed to convert WARC record to Parquet", ex);
}
}
}
catch (Exception ex) {
logger.error("Failed to convert WARC file to Parquet", ex);
}
try {
SlopTablePacker.packToSlopZip(tempDir, slopOutputFile);
FileUtils.deleteDirectory(tempDir.toFile());
}
catch (Exception ex) {
logger.error("Failed to convert WARC file to Parquet", ex);
}
}
/** Return true if the WarcResponse should be excluded from conversion */
private static boolean filterResponse(String uaString, WarcResponse response) throws IOException {
// We don't want to store robots.txt files, as they are not
// interesting for the analysis we want to do. This is important
// since txt-files in general are interesting, and we don't want to
// exclude them as a class.
if (response.targetURI().getPath().equals("/robots.txt")) {
return false;
}
var headers = response.http().headers();
var robotsTags = headers.all("X-Robots-Tag");
if (!isXRobotsTagsPermitted(robotsTags, uaString)) {
return false;
}
// Strip out responses with content types we aren't interested in
// (though ideally we wouldn't download these at all)
String contentType = headers.first("Content-Type").orElse("text/plain").toLowerCase();
if (!ContentTypes.isAccepted(contentType)) {
return false;
}
return true;
}
/** Check X-Robots-Tag header tag to see if we are allowed to index this page.
* <p>
* Reference: <a href="https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag">https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag</a>
*
* @param xRobotsHeaderTags List of X-Robots-Tag values
* @param userAgent User agent string
* @return true if we are allowed to index this page
*/
// Visible for tests
public static boolean isXRobotsTagsPermitted(List<String> xRobotsHeaderTags, String userAgent) {
boolean isPermittedGeneral = true;
boolean isPermittedMarginalia = false;
boolean isForbiddenMarginalia = false;
for (String header : xRobotsHeaderTags) {
if (header.indexOf(':') >= 0) {
String[] parts = StringUtils.split(header, ":", 2);
if (parts.length < 2)
continue;
// Is this relevant to us?
if (!Objects.equals(parts[0].trim(), userAgent))
continue;
if (parts[1].contains("noindex"))
isForbiddenMarginalia = true;
else if (parts[1].contains("none"))
isForbiddenMarginalia = true;
else if (parts[1].contains("all"))
isPermittedMarginalia = true;
}
else {
if (header.contains("noindex"))
isPermittedGeneral = false;
if (header.contains("none"))
isPermittedGeneral = false;
}
}
if (isPermittedMarginalia)
return true;
if (isForbiddenMarginalia)
return false;
return isPermittedGeneral;
}
public static int countGoodStatusCodes(Path path) throws IOException {
int cnt = 0;
try (var table = new SlopTable(path)) {
ShortColumn.Reader statusReader = statusColumn.open(table);
while (statusReader.hasRemaining()) {
if (statusReader.get() == 200) {
cnt++;
}
}
}
return cnt;
}
public static class Writer extends SlopTable {
private final EnumColumn.Writer domainColumnWriter;
private final StringColumn.Writer urlColumnWriter;
private final StringColumn.Writer ipColumnWriter;
private final ByteColumn.Writer cookiesColumnWriter;
private final ShortColumn.Writer statusColumnWriter;
private final LongColumn.Writer timestampColumnWriter;
private final EnumColumn.Writer contentTypeColumnWriter;
private final ByteArrayColumn.Writer bodyColumnWriter;
private final StringColumn.Writer headerColumnWriter;
public Writer(Path path) throws IOException {
super(path);
domainColumnWriter = domainColumn.create(this);
urlColumnWriter = urlColumn.create(this);
ipColumnWriter = ipColumn.create(this);
cookiesColumnWriter = cookiesColumn.create(this);
statusColumnWriter = statusColumn.create(this);
timestampColumnWriter = timestampColumn.create(this);
contentTypeColumnWriter = contentTypeColumn.create(this);
bodyColumnWriter = bodyColumn.create(this);
headerColumnWriter = headerColumn.create(this);
}
public void write(SlopCrawlDataRecord record) throws IOException {
domainColumnWriter.put(record.domain);
urlColumnWriter.put(record.url);
ipColumnWriter.put(record.ip);
cookiesColumnWriter.put(record.cookies ? (byte) 1 : (byte) 0);
statusColumnWriter.put((short) record.httpStatus);
timestampColumnWriter.put(record.timestamp);
contentTypeColumnWriter.put(record.contentType);
bodyColumnWriter.put(record.body);
headerColumnWriter.put(record.headers);
}
public void write(String domain, WarcResponse response) throws IOException {
HttpFetchResult result = HttpFetchResult.importWarc(response);
if (!(result instanceof HttpFetchResult.ResultOk fetchOk)) {
return;
}
byte[] bodyBytes;
String contentType;
var body = DocumentBodyExtractor.asBytes(result);
var headers = fetchOk.headers();
if (body instanceof DocumentBodyResult.Ok<byte[]> bodyOk) {
bodyBytes = bodyOk.body();
contentType = bodyOk.contentType().toString();
}
else {
bodyBytes = new byte[0];
contentType = "";
}
String headersStr;
StringJoiner headersStrBuilder = new StringJoiner("\n");
for (var header : headers.map().entrySet()) {
for (var value : header.getValue()) {
headersStrBuilder.add(header.getKey() + ": " + value);
}
}
headersStr = headersStrBuilder.toString();
write(new SlopCrawlDataRecord(
domain,
response.target(),
fetchOk.ipAddress(),
"1".equals(headers.firstValue("X-Cookies").orElse("0")),
fetchOk.statusCode(),
response.date().toEpochMilli(),
contentType,
bodyBytes,
headersStr
)
);
}
private void write(String domain, WarcXEntityRefused refused) throws IOException {
URI profile = refused.profile();
String meta;
if (profile.equals(WarcXEntityRefused.documentRobotsTxtSkippedURN)) {
meta = "x-marginalia/advisory;state=robots-txt-skipped";
}
else if (profile.equals(WarcXEntityRefused.documentBadContentTypeURN)) {
meta = "x-marginalia/advisory;state=content-type-failed-probe";
}
else if (profile.equals(WarcXEntityRefused.documentProbeTimeout)) {
meta = "x-marginalia/advisory;state=timeout-probe";
}
else if (profile.equals(WarcXEntityRefused.documentUnspecifiedError)) {
meta = "x-marginalia/advisory;state=doc-error";
}
else {
meta = "x-marginalia/advisory;state=unknown";
}
write(forDocError(domain, refused.date(), refused.target(), meta));
}
private void write(Warcinfo warcinfo) throws IOException {
String selfDomain = warcinfo.fields().first("domain").orElse("");
String ip = warcinfo.fields().first("ip").orElse("");
String probeStatus = warcinfo.fields().first("X-WARC-Probe-Status").orElse("");
if (probeStatus.startsWith("REDIRECT")) {
String redirectDomain = probeStatus.substring("REDIRECT;".length());
write(forDomainRedirect(selfDomain, warcinfo.date(), redirectDomain));
}
else if (!"OK".equals(probeStatus)) {
write(forDomainError(selfDomain, warcinfo.date(), ip, probeStatus));
}
}
}
public static class Reader extends SlopTable {
private final EnumColumn.Reader domainColumnReader;
private final StringColumn.Reader urlColumnReader;
private final StringColumn.Reader ipColumnReader;
private final ByteColumn.Reader cookiesColumnReader;
private final ShortColumn.Reader statusColumnReader;
private final LongColumn.Reader timestampColumnReader;
private final EnumColumn.Reader contentTypeColumnReader;
private final ByteArrayColumn.Reader bodyColumnReader;
private final StringColumn.Reader headerColumnReader;
public Reader(Path path) throws IOException {
super(path);
domainColumnReader = domainColumn.open(this);
urlColumnReader = urlColumn.open(this);
ipColumnReader = ipColumn.open(this);
cookiesColumnReader = cookiesColumn.open(this);
statusColumnReader = statusColumn.open(this);
timestampColumnReader = timestampColumn.open(this);
contentTypeColumnReader = contentTypeColumn.open(this);
bodyColumnReader = bodyColumn.open(this);
headerColumnReader = headerColumn.open(this);
}
public SlopCrawlDataRecord get() throws IOException {
return new SlopCrawlDataRecord(
domainColumnReader.get(),
urlColumnReader.get(),
ipColumnReader.get(),
cookiesColumnReader.get() == 1,
statusColumnReader.get(),
timestampColumnReader.get(),
contentTypeColumnReader.get(),
bodyColumnReader.get(),
headerColumnReader.get()
);
}
public boolean hasRemaining() throws IOException {
return domainColumnReader.hasRemaining();
}
}
public abstract static class FilteringReader extends SlopTable {
private final EnumColumn.Reader domainColumnReader;
private final StringColumn.Reader urlColumnReader;
private final StringColumn.Reader ipColumnReader;
private final ByteColumn.Reader cookiesColumnReader;
private final ShortColumn.Reader statusColumnReader;
private final LongColumn.Reader timestampColumnReader;
private final EnumColumn.Reader contentTypeColumnReader;
private final ByteArrayColumn.Reader bodyColumnReader;
private final StringColumn.Reader headerColumnReader;
private SlopCrawlDataRecord next = null;
public FilteringReader(Path path) throws IOException {
super(path);
domainColumnReader = domainColumn.open(this);
urlColumnReader = urlColumn.open(this);
ipColumnReader = ipColumn.open(this);
cookiesColumnReader = cookiesColumn.open(this);
statusColumnReader = statusColumn.open(this);
timestampColumnReader = timestampColumn.open(this);
contentTypeColumnReader = contentTypeColumn.open(this);
bodyColumnReader = bodyColumn.open(this);
headerColumnReader = headerColumn.open(this);
}
public abstract boolean filter(String url, int status, String contentType);
public SlopCrawlDataRecord get() throws IOException {
if (next == null) {
if (!hasRemaining()) {
throw new IllegalStateException("No more values remaining");
}
}
var val = next;
next = null;
return val;
}
public boolean hasRemaining() throws IOException {
if (next != null)
return true;
while (domainColumnReader.hasRemaining()) {
String domain = domainColumnReader.get();
String url = urlColumnReader.get();
String ip = ipColumnReader.get();
boolean cookies = cookiesColumnReader.get() == 1;
int status = statusColumnReader.get();
long timestamp = timestampColumnReader.get();
String contentType = contentTypeColumnReader.get();
LargeItem<byte[]> body = bodyColumnReader.getLarge();
LargeItem<String> headers = headerColumnReader.getLarge();
if (filter(url, status, contentType)) {
next = new SlopCrawlDataRecord(
domain, url, ip, cookies, status, timestamp, contentType, body.get(), headers.get()
);
return true;
}
else {
body.close();
headers.close();
}
}
return false;
}
}
}
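Not part of the diff: a rough sketch of migrating one parquet file to the slop format and scanning the result with a custom FilteringReader; the paths are illustrative and error handling is elided.

import nu.marginalia.slop.SlopCrawlDataRecord;

import java.io.IOException;
import java.nio.file.Path;

class SlopMigrationExample {
    public static void main(String[] args) throws IOException {
        Path parquet = Path.of("/data/crawl/0017-www.example.com.parquet");  // illustrative input
        Path slop = Path.of("/data/crawl/0017-www.example.com.slop.zip");    // illustrative output

        SlopCrawlDataRecord.convertFromParquet(parquet, slop);
        System.out.println("documents with status 200: " + SlopCrawlDataRecord.countGoodStatusCodes(slop));

        // Stream back only successful HTML responses; other rows are dropped during the scan
        try (var reader = new SlopCrawlDataRecord.FilteringReader(slop) {
            @Override
            public boolean filter(String url, int status, String contentType) {
                return status == 200 && contentType.toLowerCase().startsWith("text/html");
            }
        }) {
            while (reader.hasRemaining()) {
                SlopCrawlDataRecord record = reader.get();
                System.out.println(record.url() + " " + record.body().length + " bytes");
            }
        }
    }
}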

WarcXCookieInformationHeader.java (deleted)

@@ -1,35 +0,0 @@
package org.netpreserve.jwarc;
import okhttp3.HttpUrl;
import okhttp3.OkHttpClient;
/** Encapsulates out-of-band information about whether a website uses cookies,
* using a non-standard WARC header "X-Has-Cookies".
*/
public class WarcXCookieInformationHeader {
private boolean hasCookies = false;
private static final String headerName = "X-Has-Cookies";
public void update(OkHttpClient client, HttpUrl url) {
if (!hasCookies) {
hasCookies = !client.cookieJar().loadForRequest(url).isEmpty();
}
}
public boolean hasCookies() {
return hasCookies;
}
public void paint(WarcResponse.Builder builder) {
builder.addHeader(headerName, hasCookies ? "1" : "0");
}
public void paint(WarcXResponseReference.Builder builder) {
builder.addHeader(headerName, hasCookies ? "1" : "0");
}
public static boolean hasCookies(WarcRecord record) {
return record.headers().contains(headerName, "1");
}
}

CrawledDocumentParquetRecordFileWriterTest.java

@@ -80,7 +80,7 @@ class CrawledDocumentParquetRecordFileWriterTest {
        var document = (CrawledDocument) secondItem;
        assertEquals("https://www.marginalia.nu/", document.url);
        assertEquals("text/html", document.contentType);
-       assertEquals("hello world", document.documentBody);
+       assertEquals("hello world", document.documentBody());
        assertEquals(200, document.httpStatus);
    }

@@ -103,7 +103,7 @@ class CrawledDocumentParquetRecordFileWriterTest {
                    System.out.println(doc.url);
                    System.out.println(doc.contentType);
                    System.out.println(doc.httpStatus);
-                   System.out.println(doc.documentBody.length());
+                   System.out.println(doc.documentBody().length());
                }
            }
        } catch (IOException e) {

DomainStateDbTest.java

@@ -10,7 +10,7 @@ import java.nio.file.Path;
 import java.sql.SQLException;
 import java.time.Instant;

-import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.*;

 class DomainStateDbTest {
@@ -26,7 +26,7 @@ class DomainStateDbTest {
    }

    @Test
-   public void testSunnyDay() throws SQLException {
+   public void testSummaryRecord() throws SQLException {
        try (var db = new DomainStateDb(tempFile)) {
            var allFields = new DomainStateDb.SummaryRecord(
                    "all.marginalia.nu",
@@ -63,4 +63,21 @@ class DomainStateDbTest {
        }
    }

+   @Test
+   public void testFavicon() throws SQLException {
+       try (var db = new DomainStateDb(tempFile)) {
+           db.saveIcon("www.marginalia.nu", new DomainStateDb.FaviconRecord("text/plain", "hello world".getBytes()));
+           var maybeData = db.getIcon("www.marginalia.nu");
+           assertTrue(maybeData.isPresent());
+
+           var actualData = maybeData.get();
+           assertEquals("text/plain", actualData.contentType());
+           assertArrayEquals("hello world".getBytes(), actualData.imageData());
+
+           maybeData = db.getIcon("foobar");
+           assertTrue(maybeData.isEmpty());
+       }
+   }
+
 }

CrawlerWarcResynchronizerTest.java

@@ -1,11 +1,9 @@
package nu.marginalia.crawl.retreival; package nu.marginalia.crawl.retreival;
import nu.marginalia.crawl.fetcher.socket.IpInterceptingNetworkInterceptor; import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder; import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.model.EdgeDomain; import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl; import nu.marginalia.model.EdgeUrl;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import org.junit.jupiter.api.AfterEach; import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Test;
@@ -15,6 +13,8 @@ import org.netpreserve.jwarc.WarcResponse;
import java.io.IOException; import java.io.IOException;
import java.net.URISyntaxException; import java.net.URISyntaxException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.nio.file.Files; import java.nio.file.Files;
import java.nio.file.Path; import java.nio.file.Path;
import java.security.NoSuchAlgorithmException; import java.security.NoSuchAlgorithmException;
@@ -27,11 +27,10 @@ import static org.junit.jupiter.api.Assertions.fail;
class CrawlerWarcResynchronizerTest { class CrawlerWarcResynchronizerTest {
Path fileName; Path fileName;
Path outputFile; Path outputFile;
OkHttpClient httpClient; HttpClient httpClient;
@BeforeEach @BeforeEach
public void setUp() throws Exception { public void setUp() throws Exception {
httpClient = new OkHttpClient.Builder() httpClient = HttpClient.newBuilder()
.addNetworkInterceptor(new IpInterceptingNetworkInterceptor())
.build(); .build();
fileName = Files.createTempFile("test", ".warc.gz"); fileName = Files.createTempFile("test", ".warc.gz");
@@ -46,7 +45,7 @@ class CrawlerWarcResynchronizerTest {
@Test @Test
void run() throws IOException, URISyntaxException { void run() throws IOException, URISyntaxException {
try (var oldRecorder = new WarcRecorder(fileName)) { try (var oldRecorder = new WarcRecorder(fileName, new Cookies())) {
fetchUrl(oldRecorder, "https://www.marginalia.nu/"); fetchUrl(oldRecorder, "https://www.marginalia.nu/");
fetchUrl(oldRecorder, "https://www.marginalia.nu/log/"); fetchUrl(oldRecorder, "https://www.marginalia.nu/log/");
fetchUrl(oldRecorder, "https://www.marginalia.nu/feed/"); fetchUrl(oldRecorder, "https://www.marginalia.nu/feed/");
@@ -56,7 +55,7 @@ class CrawlerWarcResynchronizerTest {
var crawlFrontier = new DomainCrawlFrontier(new EdgeDomain("www.marginalia.nu"), List.of(), 100); var crawlFrontier = new DomainCrawlFrontier(new EdgeDomain("www.marginalia.nu"), List.of(), 100);
try (var newRecorder = new WarcRecorder(outputFile)) { try (var newRecorder = new WarcRecorder(outputFile, new Cookies())) {
new CrawlerWarcResynchronizer(crawlFrontier, newRecorder).run(fileName); new CrawlerWarcResynchronizer(crawlFrontier, newRecorder).run(fileName);
} }
@@ -79,10 +78,11 @@ class CrawlerWarcResynchronizerTest {
} }
void fetchUrl(WarcRecorder recorder, String url) throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException { void fetchUrl(WarcRecorder recorder, String url) throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException {
-        var req = new Request.Builder().url(url)
-                .addHeader("User-agent", "test.marginalia.nu")
-                .addHeader("Accept-Encoding", "gzip")
-                .get().build();
+        var req = HttpRequest.newBuilder()
+                .uri(new java.net.URI(url))
+                .header("User-agent", "test.marginalia.nu")
+                .header("Accept-Encoding", "gzip")
+                .GET().build();
recorder.fetch(httpClient, req); recorder.fetch(httpClient, req);
} }
} }
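The hunk above (and the recorder and fetcher tests further down) migrates request construction from OkHttp's Request.Builder to the JDK's built-in java.net.http client. As a point of reference, here is a minimal standalone sketch of the same pattern, assuming a plain GET sent through a default HttpClient rather than through the project's WarcRecorder:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpClientMigrationSketch {
    public static void main(String[] args) throws Exception {
        // Rough equivalent of the old new OkHttpClient.Builder().build()
        HttpClient client = HttpClient.newBuilder().build();

        // Rough equivalent of new Request.Builder().url(...).addHeader(...).get().build()
        HttpRequest request = HttpRequest.newBuilder()
                .uri(new URI("https://www.marginalia.nu/"))
                .header("User-agent", "test.marginalia.nu")
                .header("Accept-Encoding", "gzip")
                .GET()
                .build();

        // The tests hand the request to WarcRecorder.fetch(); here we just send it directly.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}

The IpInterceptingNetworkInterceptor that the old OkHttp setup registered has no direct counterpart in java.net.http, which is presumably why the new test setup simply drops it.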


@@ -2,6 +2,7 @@ package nu.marginalia.crawl.retreival.fetcher;
import com.sun.net.httpserver.HttpServer; import com.sun.net.httpserver.HttpServer;
import nu.marginalia.crawl.fetcher.ContentTags; import nu.marginalia.crawl.fetcher.ContentTags;
import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.HttpFetcher; import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl; import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder; import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
@@ -79,7 +80,7 @@ class ContentTypeProberTest {
htmlRedirEndpoint = EdgeUrl.parse("http://localhost:" + port + "/redir.gz").get(); htmlRedirEndpoint = EdgeUrl.parse("http://localhost:" + port + "/redir.gz").get();
fetcher = new HttpFetcherImpl("test"); fetcher = new HttpFetcherImpl("test");
recorder = new WarcRecorder(warcFile); recorder = new WarcRecorder(warcFile, new Cookies());
} }
@AfterEach @AfterEach


@@ -2,13 +2,11 @@ package nu.marginalia.crawl.retreival.fetcher;
import nu.marginalia.UserAgent; import nu.marginalia.UserAgent;
import nu.marginalia.crawl.fetcher.ContentTags; import nu.marginalia.crawl.fetcher.ContentTags;
import nu.marginalia.crawl.fetcher.socket.IpInterceptingNetworkInterceptor; import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder; import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.model.EdgeUrl; import nu.marginalia.model.EdgeUrl;
import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileReader; import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileReader;
import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileWriter; import nu.marginalia.parquet.crawldata.CrawledDocumentParquetRecordFileWriter;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import org.junit.jupiter.api.AfterEach; import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Test;
@@ -19,6 +17,8 @@ import org.netpreserve.jwarc.WarcXResponseReference;
import java.io.IOException; import java.io.IOException;
import java.net.URISyntaxException; import java.net.URISyntaxException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.nio.file.Files; import java.nio.file.Files;
import java.nio.file.Path; import java.nio.file.Path;
import java.security.NoSuchAlgorithmException; import java.security.NoSuchAlgorithmException;
@@ -31,17 +31,16 @@ class WarcRecorderTest {
Path fileNameWarc; Path fileNameWarc;
Path fileNameParquet; Path fileNameParquet;
WarcRecorder client; WarcRecorder client;
OkHttpClient httpClient;
HttpClient httpClient;
@BeforeEach @BeforeEach
public void setUp() throws Exception { public void setUp() throws Exception {
-        httpClient = new OkHttpClient.Builder()
-                .addNetworkInterceptor(new IpInterceptingNetworkInterceptor())
-                .build();
+        httpClient = HttpClient.newBuilder().build();
fileNameWarc = Files.createTempFile("test", ".warc"); fileNameWarc = Files.createTempFile("test", ".warc");
fileNameParquet = Files.createTempFile("test", ".parquet"); fileNameParquet = Files.createTempFile("test", ".parquet");
client = new WarcRecorder(fileNameWarc); client = new WarcRecorder(fileNameWarc, new Cookies());
} }
@AfterEach @AfterEach
@@ -52,10 +51,13 @@ class WarcRecorderTest {
@Test @Test
void fetch() throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException { void fetch() throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException {
-        client.fetch(httpClient, new Request.Builder().url("https://www.marginalia.nu/")
-                .addHeader("User-agent", "test.marginalia.nu")
-                .addHeader("Accept-Encoding", "gzip")
-                .get().build());
+        client.fetch(httpClient,
+                HttpRequest.newBuilder()
+                        .uri(new java.net.URI("https://www.marginalia.nu/"))
+                        .header("User-agent", "test.marginalia.nu")
+                        .header("Accept-Encoding", "gzip")
+                        .GET().build()
+        );
Map<String, String> sampleData = new HashMap<>(); Map<String, String> sampleData = new HashMap<>();
try (var warcReader = new WarcReader(fileNameWarc)) { try (var warcReader = new WarcReader(fileNameWarc)) {
@@ -76,11 +78,11 @@ class WarcRecorderTest {
@Test @Test
public void flagAsSkipped() throws IOException, URISyntaxException { public void flagAsSkipped() throws IOException, URISyntaxException {
try (var recorder = new WarcRecorder(fileNameWarc)) { try (var recorder = new WarcRecorder(fileNameWarc, new Cookies())) {
recorder.writeReferenceCopy(new EdgeUrl("https://www.marginalia.nu/"), recorder.writeReferenceCopy(new EdgeUrl("https://www.marginalia.nu/"),
"text/html", "text/html",
200, 200,
"<?doctype html><html><body>test</body></html>", "<?doctype html><html><body>test</body></html>".getBytes(),
null, null,
ContentTags.empty()); ContentTags.empty());
} }
@@ -100,7 +102,7 @@ class WarcRecorderTest {
@Test @Test
public void flagAsSkippedNullBody() throws IOException, URISyntaxException { public void flagAsSkippedNullBody() throws IOException, URISyntaxException {
try (var recorder = new WarcRecorder(fileNameWarc)) { try (var recorder = new WarcRecorder(fileNameWarc, new Cookies())) {
recorder.writeReferenceCopy(new EdgeUrl("https://www.marginalia.nu/"), recorder.writeReferenceCopy(new EdgeUrl("https://www.marginalia.nu/"),
"text/html", "text/html",
200, 200,
@@ -112,11 +114,11 @@ class WarcRecorderTest {
@Test @Test
public void testSaveImport() throws URISyntaxException, IOException { public void testSaveImport() throws URISyntaxException, IOException {
try (var recorder = new WarcRecorder(fileNameWarc)) { try (var recorder = new WarcRecorder(fileNameWarc, new Cookies())) {
recorder.writeReferenceCopy(new EdgeUrl("https://www.marginalia.nu/"), recorder.writeReferenceCopy(new EdgeUrl("https://www.marginalia.nu/"),
"text/html", "text/html",
200, 200,
"<?doctype html><html><body>test</body></html>", "<?doctype html><html><body>test</body></html>".getBytes(),
null, ContentTags.empty()); null, ContentTags.empty());
} }
@@ -136,19 +138,23 @@ class WarcRecorderTest {
@Test @Test
public void testConvertToParquet() throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException { public void testConvertToParquet() throws NoSuchAlgorithmException, IOException, URISyntaxException, InterruptedException {
-        client.fetch(httpClient, new Request.Builder().url("https://www.marginalia.nu/")
-                .addHeader("User-agent", "test.marginalia.nu")
-                .addHeader("Accept-Encoding", "gzip")
-                .get().build());
-        client.fetch(httpClient, new Request.Builder().url("https://www.marginalia.nu/log/")
-                .addHeader("User-agent", "test.marginalia.nu")
-                .addHeader("Accept-Encoding", "gzip")
-                .get().build());
-        client.fetch(httpClient, new Request.Builder().url("https://www.marginalia.nu/sanic.png")
-                .addHeader("User-agent", "test.marginalia.nu")
-                .addHeader("Accept-Encoding", "gzip")
-                .get().build());
-        client.close();
+        client.fetch(httpClient, HttpRequest.newBuilder()
+                .uri(new java.net.URI("https://www.marginalia.nu/"))
+                .header("User-agent", "test.marginalia.nu")
+                .header("Accept-Encoding", "gzip")
+                .GET().build());
+        client.fetch(httpClient, HttpRequest.newBuilder()
+                .uri(new java.net.URI("https://www.marginalia.nu/log/"))
+                .header("User-agent", "test.marginalia.nu")
+                .header("Accept-Encoding", "gzip")
+                .GET().build());
+        client.fetch(httpClient, HttpRequest.newBuilder()
+                .uri(new java.net.URI("https://www.marginalia.nu/sanic.png"))
+                .header("User-agent", "test.marginalia.nu")
+                .header("Accept-Encoding", "gzip")
+                .GET().build());
CrawledDocumentParquetRecordFileWriter.convertWarc( CrawledDocumentParquetRecordFileWriter.convertWarc(
"www.marginalia.nu", "www.marginalia.nu",


@@ -4,6 +4,7 @@ import nu.marginalia.crawl.fetcher.ContentTags;
import nu.marginalia.crawl.fetcher.HttpFetcher; import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl; import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder; import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
import nu.marginalia.model.EdgeUrl; import nu.marginalia.model.EdgeUrl;
import nu.marginalia.model.body.ContentTypeLogic; import nu.marginalia.model.body.ContentTypeLogic;
import nu.marginalia.model.body.DocumentBodyExtractor; import nu.marginalia.model.body.DocumentBodyExtractor;
@@ -37,6 +38,12 @@ class HttpFetcherTest {
} }
} }
@Test
void testSitemapMarginalia() {
var fetcher = new HttpFetcherImpl("nu.marginalia.edge-crawler");
fetcher.fetchSitemapUrls("https://www.marginalia.nu/sitemap.xml", new CrawlDelayTimer(1)).forEach(System.out::println);
}
@Test @Test
void fetchText() throws Exception { void fetchText() throws Exception {
var fetcher = new HttpFetcherImpl("nu.marginalia.edge-crawler"); var fetcher = new HttpFetcherImpl("nu.marginalia.edge-crawler");


@@ -3,11 +3,9 @@ package nu.marginalia.crawling.retreival;
import crawlercommons.robots.SimpleRobotRules; import crawlercommons.robots.SimpleRobotRules;
import nu.marginalia.crawl.CrawlerMain; import nu.marginalia.crawl.CrawlerMain;
import nu.marginalia.crawl.DomainStateDb; import nu.marginalia.crawl.DomainStateDb;
import nu.marginalia.crawl.fetcher.ContentTags; import nu.marginalia.crawl.fetcher.*;
import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.crawl.fetcher.SitemapRetriever;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder; import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.retreival.CrawlDelayTimer;
import nu.marginalia.crawl.retreival.CrawlerRetreiver; import nu.marginalia.crawl.retreival.CrawlerRetreiver;
import nu.marginalia.crawl.retreival.DomainProber; import nu.marginalia.crawl.retreival.DomainProber;
import nu.marginalia.model.EdgeDomain; import nu.marginalia.model.EdgeDomain;
@@ -17,7 +15,6 @@ import nu.marginalia.model.crawldata.CrawledDocument;
import nu.marginalia.model.crawldata.CrawlerDocumentStatus; import nu.marginalia.model.crawldata.CrawlerDocumentStatus;
import nu.marginalia.model.crawldata.SerializableCrawlData; import nu.marginalia.model.crawldata.SerializableCrawlData;
import nu.marginalia.test.CommonTestData; import nu.marginalia.test.CommonTestData;
import okhttp3.Headers;
import org.junit.jupiter.api.AfterEach; import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Test;
@@ -27,6 +24,7 @@ import org.slf4j.LoggerFactory;
import java.io.IOException; import java.io.IOException;
import java.net.URISyntaxException; import java.net.URISyntaxException;
import java.net.http.HttpHeaders;
import java.nio.file.Files; import java.nio.file.Files;
import java.nio.file.Path; import java.nio.file.Path;
import java.sql.SQLException; import java.sql.SQLException;
@@ -122,7 +120,7 @@ public class CrawlerMockFetcherTest {
public void setAllowAllContentTypes(boolean allowAllContentTypes) {} public void setAllowAllContentTypes(boolean allowAllContentTypes) {}
@Override @Override
public List<String> getCookies() { return List.of();} public Cookies getCookies() { return new Cookies();}
@Override @Override
public void clearCookies() {} public void clearCookies() {}
@@ -143,13 +141,13 @@ public class CrawlerMockFetcherTest {
public HttpFetchResult fetchContent(EdgeUrl url, WarcRecorder recorder, ContentTags tags, ProbeType probeType) { public HttpFetchResult fetchContent(EdgeUrl url, WarcRecorder recorder, ContentTags tags, ProbeType probeType) {
logger.info("Fetching {}", url); logger.info("Fetching {}", url);
if (mockData.containsKey(url)) { if (mockData.containsKey(url)) {
byte[] bodyBytes = mockData.get(url).documentBody.getBytes(); byte[] bodyBytes = mockData.get(url).documentBodyBytes;
try { try {
return new HttpFetchResult.ResultOk( return new HttpFetchResult.ResultOk(
url.asURI(), url.asURI(),
200, 200,
new Headers.Builder().build(), HttpHeaders.of(Map.of(), (k,v)->true),
"127.0.0.1", "127.0.0.1",
bodyBytes, bodyBytes,
0, 0,
@@ -164,6 +162,11 @@ public class CrawlerMockFetcherTest {
return new HttpFetchResult.ResultNone(); return new HttpFetchResult.ResultNone();
} }
@Override
public List<EdgeUrl> fetchSitemapUrls(String rootSitemapUrl, CrawlDelayTimer delayTimer) {
return List.of();
}
@Override @Override
public SimpleRobotRules fetchRobotRules(EdgeDomain domain, WarcRecorder recorder) { public SimpleRobotRules fetchRobotRules(EdgeDomain domain, WarcRecorder recorder) {
return new SimpleRobotRules(); return new SimpleRobotRules();
@@ -174,5 +177,9 @@ public class CrawlerMockFetcherTest {
return Mockito.mock(SitemapRetriever.class); return Mockito.mock(SitemapRetriever.class);
} }
@Override
public void close() {
}
} }
} }
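The mock above now fakes a response with the JDK's HttpHeaders instead of okhttp3.Headers. HttpHeaders.of takes a map of header names to value lists plus a filter predicate, so an empty map with an accept-all predicate yields an empty header set. A small illustrative sketch (the header values below are made up, not taken from the project):

import java.net.http.HttpHeaders;
import java.util.List;
import java.util.Map;

public class HeadersSketch {
    public static void main(String[] args) {
        // Empty header set, as used by the mock fetcher: accept-all filter over an empty map
        HttpHeaders empty = HttpHeaders.of(Map.of(), (name, value) -> true);

        // A populated header set; the predicate can drop entries, here it keeps everything
        HttpHeaders headers = HttpHeaders.of(
                Map.of("Content-Type", List.of("text/html")),
                (name, value) -> true);

        System.out.println(empty.map());                                   // {}
        System.out.println(headers.firstValue("Content-Type").orElse("n/a"));
    }
}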


@@ -5,11 +5,11 @@ import nu.marginalia.WmsaHome;
import nu.marginalia.atags.model.DomainLinks; import nu.marginalia.atags.model.DomainLinks;
import nu.marginalia.crawl.CrawlerMain; import nu.marginalia.crawl.CrawlerMain;
import nu.marginalia.crawl.DomainStateDb; import nu.marginalia.crawl.DomainStateDb;
import nu.marginalia.crawl.fetcher.Cookies;
import nu.marginalia.crawl.fetcher.HttpFetcher; import nu.marginalia.crawl.fetcher.HttpFetcher;
import nu.marginalia.crawl.fetcher.HttpFetcherImpl; import nu.marginalia.crawl.fetcher.HttpFetcherImpl;
import nu.marginalia.crawl.fetcher.warc.WarcRecorder; import nu.marginalia.crawl.fetcher.warc.WarcRecorder;
import nu.marginalia.crawl.retreival.*; import nu.marginalia.crawl.retreival.*;
import nu.marginalia.io.CrawledDomainReader;
import nu.marginalia.io.SerializableCrawlDataStream; import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.model.EdgeDomain; import nu.marginalia.model.EdgeDomain;
import nu.marginalia.model.EdgeUrl; import nu.marginalia.model.EdgeUrl;
@@ -180,7 +180,7 @@ class CrawlerRetreiverTest {
new EdgeDomain("www.marginalia.nu"), new EdgeDomain("www.marginalia.nu"),
List.of(), 100); List.of(), 100);
var resync = new CrawlerWarcResynchronizer(revisitCrawlFrontier, var resync = new CrawlerWarcResynchronizer(revisitCrawlFrontier,
new WarcRecorder(tempFileWarc2) new WarcRecorder(tempFileWarc2, new Cookies())
); );
// truncate the size of the file to simulate a crash // truncate the size of the file to simulate a crash
@@ -226,7 +226,7 @@ class CrawlerRetreiverTest {
convertToParquet(tempFileWarc1, tempFileParquet1); convertToParquet(tempFileWarc1, tempFileParquet1);
try (var stream = CrawledDomainReader.createDataStream(tempFileParquet1)) { try (var stream = SerializableCrawlDataStream.openDataStream(tempFileParquet1)) {
while (stream.hasNext()) { while (stream.hasNext()) {
if (stream.next() instanceof CrawledDocument doc) { if (stream.next() instanceof CrawledDocument doc) {
data.add(doc); data.add(doc);
@@ -279,7 +279,7 @@ class CrawlerRetreiverTest {
convertToParquet(tempFileWarc1, tempFileParquet1); convertToParquet(tempFileWarc1, tempFileParquet1);
try (var stream = CrawledDomainReader.createDataStream(tempFileParquet1)) { try (var stream = SerializableCrawlDataStream.openDataStream(tempFileParquet1)) {
while (stream.hasNext()) { while (stream.hasNext()) {
if (stream.next() instanceof CrawledDocument doc) { if (stream.next() instanceof CrawledDocument doc) {
data.add(doc); data.add(doc);
@@ -328,7 +328,7 @@ class CrawlerRetreiverTest {
doCrawl(tempFileWarc1, specs); doCrawl(tempFileWarc1, specs);
convertToParquet(tempFileWarc1, tempFileParquet1); convertToParquet(tempFileWarc1, tempFileParquet1);
try (var stream = CrawledDomainReader.createDataStream(tempFileParquet1)) { try (var stream = SerializableCrawlDataStream.openDataStream(tempFileParquet1)) {
while (stream.hasNext()) { while (stream.hasNext()) {
if (stream.next() instanceof CrawledDocument doc) { if (stream.next() instanceof CrawledDocument doc) {
data.add(doc); data.add(doc);
@@ -375,7 +375,7 @@ class CrawlerRetreiverTest {
doCrawl(tempFileWarc1, specs); doCrawl(tempFileWarc1, specs);
convertToParquet(tempFileWarc1, tempFileParquet1); convertToParquet(tempFileWarc1, tempFileParquet1);
doCrawlWithReferenceStream(specs, doCrawlWithReferenceStream(specs,
CrawledDomainReader.createDataStream(tempFileParquet1) new CrawlDataReference(tempFileParquet1)
); );
convertToParquet(tempFileWarc2, tempFileParquet2); convertToParquet(tempFileWarc2, tempFileParquet2);
@@ -396,7 +396,7 @@ class CrawlerRetreiverTest {
}); });
} }
try (var ds = CrawledDomainReader.createDataStream(tempFileParquet2)) { try (var ds = SerializableCrawlDataStream.openDataStream(tempFileParquet2)) {
while (ds.hasNext()) { while (ds.hasNext()) {
var doc = ds.next(); var doc = ds.next();
if (doc instanceof CrawledDomain dr) { if (doc instanceof CrawledDomain dr) {
@@ -438,7 +438,7 @@ class CrawlerRetreiverTest {
convertToParquet(tempFileWarc1, tempFileParquet1); convertToParquet(tempFileWarc1, tempFileParquet1);
try (var stream = CrawledDomainReader.createDataStream(tempFileParquet1)) { try (var stream = SerializableCrawlDataStream.openDataStream(tempFileParquet1)) {
while (stream.hasNext()) { while (stream.hasNext()) {
var doc = stream.next(); var doc = stream.next();
data.computeIfAbsent(doc.getClass(), c -> new ArrayList<>()).add(doc); data.computeIfAbsent(doc.getClass(), c -> new ArrayList<>()).add(doc);
@@ -447,18 +447,16 @@ class CrawlerRetreiverTest {
throw new RuntimeException(e); throw new RuntimeException(e);
} }
var stream = CrawledDomainReader.createDataStream(tempFileParquet1);
System.out.println("---"); System.out.println("---");
doCrawlWithReferenceStream(specs, stream); doCrawlWithReferenceStream(specs, new CrawlDataReference(tempFileParquet1));
var revisitCrawlFrontier = new DomainCrawlFrontier( var revisitCrawlFrontier = new DomainCrawlFrontier(
new EdgeDomain("www.marginalia.nu"), new EdgeDomain("www.marginalia.nu"),
List.of(), 100); List.of(), 100);
var resync = new CrawlerWarcResynchronizer(revisitCrawlFrontier, var resync = new CrawlerWarcResynchronizer(revisitCrawlFrontier,
new WarcRecorder(tempFileWarc3) new WarcRecorder(tempFileWarc3, new Cookies())
); );
// truncate the size of the file to simulate a crash // truncate the size of the file to simulate a crash
@@ -487,7 +485,7 @@ class CrawlerRetreiverTest {
}); });
} }
try (var ds = CrawledDomainReader.createDataStream(tempFileParquet2)) { try (var ds = SerializableCrawlDataStream.openDataStream(tempFileParquet2)) {
while (ds.hasNext()) { while (ds.hasNext()) {
var doc = ds.next(); var doc = ds.next();
if (doc instanceof CrawledDomain dr) { if (doc instanceof CrawledDomain dr) {
@@ -508,12 +506,11 @@ class CrawlerRetreiverTest {
} }
} }
private void doCrawlWithReferenceStream(CrawlerMain.CrawlSpecRecord specs, SerializableCrawlDataStream stream) { private void doCrawlWithReferenceStream(CrawlerMain.CrawlSpecRecord specs, CrawlDataReference reference) {
try (var recorder = new WarcRecorder(tempFileWarc2); try (var recorder = new WarcRecorder(tempFileWarc2, new Cookies());
var db = new DomainStateDb(tempFileDb) var db = new DomainStateDb(tempFileDb)
) { ) {
new CrawlerRetreiver(httpFetcher, new DomainProber(d -> true), specs, db, recorder).crawlDomain(new DomainLinks(), new CrawlerRetreiver(httpFetcher, new DomainProber(d -> true), specs, db, recorder).crawlDomain(new DomainLinks(), reference);
new CrawlDataReference(stream));
} }
catch (IOException | SQLException ex) { catch (IOException | SQLException ex) {
Assertions.fail(ex); Assertions.fail(ex);
@@ -522,7 +519,7 @@ class CrawlerRetreiverTest {
@NotNull @NotNull
private DomainCrawlFrontier doCrawl(Path tempFileWarc1, CrawlerMain.CrawlSpecRecord specs) { private DomainCrawlFrontier doCrawl(Path tempFileWarc1, CrawlerMain.CrawlSpecRecord specs) {
try (var recorder = new WarcRecorder(tempFileWarc1); try (var recorder = new WarcRecorder(tempFileWarc1, new Cookies());
var db = new DomainStateDb(tempFileDb) var db = new DomainStateDb(tempFileDb)
) { ) {
var crawler = new CrawlerRetreiver(httpFetcher, new DomainProber(d -> true), specs, db, recorder); var crawler = new CrawlerRetreiver(httpFetcher, new DomainProber(d -> true), specs, db, recorder);


@@ -3,7 +3,6 @@ package nu.marginalia.extractor;
import com.google.inject.Inject; import com.google.inject.Inject;
import gnu.trove.set.hash.TLongHashSet; import gnu.trove.set.hash.TLongHashSet;
import nu.marginalia.hash.MurmurHash3_128; import nu.marginalia.hash.MurmurHash3_128;
import nu.marginalia.io.CrawledDomainReader;
import nu.marginalia.io.SerializableCrawlDataStream; import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.link_parser.LinkParser; import nu.marginalia.link_parser.LinkParser;
import nu.marginalia.model.EdgeDomain; import nu.marginalia.model.EdgeDomain;
@@ -14,7 +13,6 @@ import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorage; import nu.marginalia.storage.model.FileStorage;
import nu.marginalia.storage.model.FileStorageId; import nu.marginalia.storage.model.FileStorageId;
import org.apache.commons.lang3.StringUtils; import org.apache.commons.lang3.StringUtils;
import org.jsoup.Jsoup;
import org.slf4j.Logger; import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
@@ -52,17 +50,15 @@ public class AtagExporter implements ExporterIf {
try (var bw = new BufferedWriter(new OutputStreamWriter(new GZIPOutputStream(Files.newOutputStream(tmpFile, StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING))))) try (var bw = new BufferedWriter(new OutputStreamWriter(new GZIPOutputStream(Files.newOutputStream(tmpFile, StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING)))))
{ {
Path crawlerLogFile = inputDir.resolve("crawler.log");
var tagWriter = new ATagCsvWriter(bw); var tagWriter = new ATagCsvWriter(bw);
for (var item : WorkLog.iterable(crawlerLogFile)) { for (var item : WorkLog.iterable(inputDir.resolve("crawler.log"))) {
if (Thread.interrupted()) { if (Thread.interrupted()) {
throw new InterruptedException(); throw new InterruptedException();
} }
Path crawlDataPath = inputDir.resolve(item.relPath()); Path crawlDataPath = inputDir.resolve(item.relPath());
try (var stream = CrawledDomainReader.createDataStream(crawlDataPath)) { try (var stream = SerializableCrawlDataStream.openDataStream(crawlDataPath)) {
exportLinks(tagWriter, stream); exportLinks(tagWriter, stream);
} }
catch (Exception ex) { catch (Exception ex) {
@@ -89,15 +85,19 @@ public class AtagExporter implements ExporterIf {
while (stream.hasNext()) { while (stream.hasNext()) {
if (!(stream.next() instanceof CrawledDocument doc)) if (!(stream.next() instanceof CrawledDocument doc))
continue; continue;
if (null == doc.documentBody) if (!doc.hasBody())
continue; continue;
if (!doc.contentType.toLowerCase().startsWith("text/html")) if (!doc.contentType.toLowerCase().startsWith("text/html"))
continue; continue;
var baseUrl = new EdgeUrl(doc.url); var baseUrl = new EdgeUrl(doc.url);
var parsed = Jsoup.parse(doc.documentBody); var parsed = doc.parseBody();
for (var atag : parsed.getElementsByTag("a")) { for (var atag : parsed.getElementsByTag("a")) {
if (!atag.hasAttr("href")) {
continue;
}
String linkText = atag.text(); String linkText = atag.text();
if (!linkFilter.isLinkTextEligible(linkText)) { if (!linkFilter.isLinkTextEligible(linkText)) {


@@ -1,7 +1,6 @@
package nu.marginalia.extractor; package nu.marginalia.extractor;
import com.google.inject.Inject; import com.google.inject.Inject;
import nu.marginalia.io.CrawledDomainReader;
import nu.marginalia.io.SerializableCrawlDataStream; import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.link_parser.FeedExtractor; import nu.marginalia.link_parser.FeedExtractor;
import nu.marginalia.link_parser.LinkParser; import nu.marginalia.link_parser.LinkParser;
@@ -12,7 +11,6 @@ import nu.marginalia.process.log.WorkLog;
import nu.marginalia.storage.FileStorageService; import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorage; import nu.marginalia.storage.model.FileStorage;
import nu.marginalia.storage.model.FileStorageId; import nu.marginalia.storage.model.FileStorageId;
import org.jsoup.Jsoup;
import java.io.BufferedWriter; import java.io.BufferedWriter;
import java.io.IOException; import java.io.IOException;
@@ -57,7 +55,7 @@ public class FeedExporter implements ExporterIf {
} }
Path crawlDataPath = inputDir.resolve(item.relPath()); Path crawlDataPath = inputDir.resolve(item.relPath());
try (var stream = CrawledDomainReader.createDataStream(crawlDataPath)) { try (var stream = SerializableCrawlDataStream.openDataStream(crawlDataPath)) {
exportFeeds(tagWriter, stream); exportFeeds(tagWriter, stream);
} }
catch (Exception ex) { catch (Exception ex) {
@@ -76,18 +74,18 @@ public class FeedExporter implements ExporterIf {
private boolean exportFeeds(FeedCsvWriter exporter, SerializableCrawlDataStream stream) throws IOException, URISyntaxException { private boolean exportFeeds(FeedCsvWriter exporter, SerializableCrawlDataStream stream) throws IOException, URISyntaxException {
FeedExtractor feedExtractor = new FeedExtractor(new LinkParser()); FeedExtractor feedExtractor = new FeedExtractor(new LinkParser());
int size = stream.sizeHint(); int size = stream.getSizeHint();
while (stream.hasNext()) { while (stream.hasNext()) {
if (!(stream.next() instanceof CrawledDocument doc)) if (!(stream.next() instanceof CrawledDocument doc))
continue; continue;
if (null == doc.documentBody) if (!doc.hasBody())
continue; continue;
if (!doc.contentType.toLowerCase().startsWith("text/html")) if (!doc.contentType.toLowerCase().startsWith("text/html"))
continue; continue;
var baseUrl = new EdgeUrl(doc.url); var baseUrl = new EdgeUrl(doc.url);
var parsed = Jsoup.parse(doc.documentBody); var parsed = doc.parseBody();
List<EdgeUrl> feedUrls = new ArrayList<>(); List<EdgeUrl> feedUrls = new ArrayList<>();
for (var link : parsed.select("link[rel=alternate]")) { for (var link : parsed.select("link[rel=alternate]")) {


@@ -5,7 +5,7 @@ import gnu.trove.map.hash.TLongIntHashMap;
import gnu.trove.set.hash.TLongHashSet; import gnu.trove.set.hash.TLongHashSet;
import nu.marginalia.WmsaHome; import nu.marginalia.WmsaHome;
import nu.marginalia.converting.processor.logic.dom.DomPruningFilter; import nu.marginalia.converting.processor.logic.dom.DomPruningFilter;
import nu.marginalia.io.CrawledDomainReader; import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.language.filter.LanguageFilter; import nu.marginalia.language.filter.LanguageFilter;
import nu.marginalia.language.model.DocumentLanguageData; import nu.marginalia.language.model.DocumentLanguageData;
import nu.marginalia.language.sentence.SentenceExtractor; import nu.marginalia.language.sentence.SentenceExtractor;
@@ -15,7 +15,6 @@ import nu.marginalia.storage.FileStorageService;
import nu.marginalia.storage.model.FileStorage; import nu.marginalia.storage.model.FileStorage;
import nu.marginalia.storage.model.FileStorageId; import nu.marginalia.storage.model.FileStorageId;
import nu.marginalia.util.SimpleBlockingThreadPool; import nu.marginalia.util.SimpleBlockingThreadPool;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document; import org.jsoup.nodes.Document;
import org.slf4j.Logger; import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
@@ -104,19 +103,19 @@ public class TermFrequencyExporter implements ExporterIf {
{ {
TLongHashSet words = new TLongHashSet(1000); TLongHashSet words = new TLongHashSet(1000);
try (var stream = CrawledDomainReader.createDataStream(crawlDataPath)) { try (var stream = SerializableCrawlDataStream.openDataStream(crawlDataPath)) {
while (stream.hasNext()) { while (stream.hasNext()) {
if (Thread.interrupted()) if (Thread.interrupted())
return; return;
if (!(stream.next() instanceof CrawledDocument doc)) continue; if (!(stream.next() instanceof CrawledDocument doc)) continue;
if (doc.documentBody == null) continue; if (!doc.hasBody()) continue;
if (!doc.contentType.toLowerCase().startsWith("text/html")) if (!doc.contentType.toLowerCase().startsWith("text/html"))
continue; continue;
docCount.incrementAndGet(); docCount.incrementAndGet();
Document parsed = Jsoup.parse(doc.documentBody); Document parsed = doc.parseBody();
parsed.body().filter(new DomPruningFilter(0.5)); parsed.body().filter(new DomPruningFilter(0.5));
DocumentLanguageData dld = se.extractSentences(parsed); DocumentLanguageData dld = se.extractSentences(parsed);
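The exporter changes above (AtagExporter, FeedExporter, TermFrequencyExporter) all follow the same consumer pattern: open the crawl data with SerializableCrawlDataStream.openDataStream, skip records without a body via hasBody(), and parse with doc.parseBody() instead of Jsoup.parse(doc.documentBody). A condensed sketch of that pattern, using only the accessors visible in this diff (the file path is illustrative):

import nu.marginalia.io.SerializableCrawlDataStream;
import nu.marginalia.model.crawldata.CrawledDocument;

import java.nio.file.Path;

public class CrawlDataConsumerSketch {
    public static void main(String[] args) throws Exception {
        Path crawlDataPath = Path.of("crawl-data.slop.zip"); // illustrative path, not a real location

        try (var stream = SerializableCrawlDataStream.openDataStream(crawlDataPath)) {
            while (stream.hasNext()) {
                if (!(stream.next() instanceof CrawledDocument doc))
                    continue;
                if (!doc.hasBody())
                    continue;
                if (!doc.contentType.toLowerCase().startsWith("text/html"))
                    continue;

                // parseBody() replaces the old Jsoup.parse(doc.documentBody) call
                var parsed = doc.parseBody();
                System.out.println(doc.url + " -> " + parsed.title());
            }
        }
    }
}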


@@ -56,7 +56,6 @@ dependencies {
implementation libs.zstd implementation libs.zstd
implementation libs.jwarc implementation libs.jwarc
implementation libs.crawlercommons implementation libs.crawlercommons
implementation libs.okhttp3
implementation libs.jsoup implementation libs.jsoup
implementation libs.opencsv implementation libs.opencsv
implementation libs.fastutil implementation libs.fastutil


@@ -119,12 +119,16 @@ public class LiveCrawlDataSet implements AutoCloseable {
} }
} }
private String decompress(byte[] data) { private String decompressStr(byte[] data) {
return new String(decompressBytes(data));
}
private byte[] decompressBytes(byte[] data) {
// gzip decompression // gzip decompression
try (var bis = new ByteArrayInputStream(data); try (ByteArrayInputStream bis = new ByteArrayInputStream(data);
var gzip = new GZIPInputStream(bis)) GZIPInputStream gzip = new GZIPInputStream(bis))
{ {
return new String(gzip.readAllBytes()); return gzip.readAllBytes();
} }
catch (IOException ex) { catch (IOException ex) {
throw new RuntimeException(ex); throw new RuntimeException(ex);
@@ -177,8 +181,8 @@ public class LiveCrawlDataSet implements AutoCloseable {
dataStack = new ArrayList<>(); dataStack = new ArrayList<>();
while (rs.next()) { while (rs.next()) {
String url = rs.getString("url"); String url = rs.getString("url");
String body = decompress(rs.getBytes("body")); byte[] body = decompressBytes(rs.getBytes("body"));
String headers = decompress(rs.getBytes("headers")); String headers = decompressStr(rs.getBytes("headers"));
dataStack.add(new CrawledDocument( dataStack.add(new CrawledDocument(
"LIVE", "LIVE",
@@ -224,7 +228,7 @@ public class LiveCrawlDataSet implements AutoCloseable {
} }
@Override @Override
public boolean hasNext() throws IOException { public boolean hasNext() {
if (dataStack == null) { if (dataStack == null) {
query(); query();
} }
@@ -232,7 +236,7 @@ public class LiveCrawlDataSet implements AutoCloseable {
} }
@Override @Override
public void close() throws Exception { public void close() {
dataStack.clear(); dataStack.clear();
} }
} }


@@ -200,7 +200,7 @@ public class LiveCrawlerMain extends ProcessMainClass {
writer.setOrdinalOffset(67_000_000); writer.setOrdinalOffset(67_000_000);
for (SerializableCrawlDataStream stream : hb.wrap("Processing", dataSet.getDataStreams())) { for (SerializableCrawlDataStream stream : hb.wrap("Processing", dataSet.getDataStreams())) {
writer.write(domainProcessor.sideloadProcessing(stream, 0, Set.of("special:live"))); writer.write(domainProcessor.simpleProcessing(stream, 0, Set.of("special:live")));
} }
} }


@@ -51,7 +51,7 @@ public class LiveCrawlDataSetTest {
case CrawledDocument document -> { case CrawledDocument document -> {
dataCount++; dataCount++;
Assertions.assertEquals("https://www.example.com/", document.url); Assertions.assertEquals("https://www.example.com/", document.url);
Assertions.assertEquals("test", document.documentBody); Assertions.assertEquals("test", document.documentBody());
} }
} }
} }


@@ -49,7 +49,7 @@ class SimpleLinkScraperTest {
List<CrawledDocument> documents = firstStream.docsAsList(); List<CrawledDocument> documents = firstStream.docsAsList();
Assertions.assertEquals(1, documents.size()); Assertions.assertEquals(1, documents.size());
Assertions.assertTrue(documents.getFirst().documentBody.startsWith("<!doctype")); Assertions.assertTrue(documents.getFirst().documentBody().startsWith("<!doctype"));
} }


@@ -3,7 +3,7 @@ plugins {
id 'application' id 'application'
id 'jvm-test-suite' id 'jvm-test-suite'
id 'com.google.cloud.tools.jib' version '3.4.3' id 'com.google.cloud.tools.jib' version '3.4.4'
} }
java { java {


@@ -3,7 +3,7 @@ plugins {
id 'application' id 'application'
id 'jvm-test-suite' id 'jvm-test-suite'
id 'com.google.cloud.tools.jib' version '3.4.3' id 'com.google.cloud.tools.jib' version '3.4.4'
} }
application { application {


@@ -3,7 +3,7 @@ plugins {
id 'application' id 'application'
id 'jvm-test-suite' id 'jvm-test-suite'
id 'com.google.cloud.tools.jib' version '3.4.3' id 'com.google.cloud.tools.jib' version '3.4.4'
} }
application { application {


@@ -5,7 +5,7 @@ plugins {
id 'application' id 'application'
id 'jvm-test-suite' id 'jvm-test-suite'
id 'com.google.cloud.tools.jib' version '3.4.3' id 'com.google.cloud.tools.jib' version '3.4.4'
} }
application { application {


@@ -3,7 +3,7 @@ plugins {
id 'application' id 'application'
id 'jvm-test-suite' id 'jvm-test-suite'
id 'gg.jte.gradle' version '3.1.15' id 'gg.jte.gradle' version '3.1.15'
id 'com.google.cloud.tools.jib' version '3.4.3' id 'com.google.cloud.tools.jib' version '3.4.4'
} }
application { application {
@@ -104,6 +104,8 @@ task compileTailwind {
doLast { doLast {
exec { exec {
// If you're getting a build error like 'npm error could not determine executable to run'
// pointing you here, you need to run `npm install -D tailwindcss`
workingDir projectDir workingDir projectDir
if (System.getProperty('os.name').toLowerCase().contains('windows')) { if (System.getProperty('os.name').toLowerCase().contains('windows')) {
commandLine 'cmd', '/c', 'npx', 'tailwindcss', commandLine 'cmd', '/c', 'npx', 'tailwindcss',


@@ -84,18 +84,33 @@ public record SearchParameters(WebsiteUrl url,
} }
public String renderUrl() { public String renderUrl() {
-        String path = String.format("/search?query=%s&profile=%s&js=%s&adtech=%s&recent=%s&searchTitle=%s&newfilter=%s&page=%d",
-                URLEncoder.encode(query, StandardCharsets.UTF_8),
-                URLEncoder.encode(profile.filterId, StandardCharsets.UTF_8),
-                URLEncoder.encode(js.value, StandardCharsets.UTF_8),
-                URLEncoder.encode(adtech.value, StandardCharsets.UTF_8),
-                URLEncoder.encode(recent.value, StandardCharsets.UTF_8),
-                URLEncoder.encode(searchTitle.value, StandardCharsets.UTF_8),
-                Boolean.valueOf(newFilter).toString(),
-                page
-        );
-        return path;
+        StringBuilder pathBuilder = new StringBuilder("/search?");
+        pathBuilder.append("query=").append(URLEncoder.encode(query, StandardCharsets.UTF_8));
+        if (profile != SearchProfile.NO_FILTER) {
+            pathBuilder.append("&profile=").append(URLEncoder.encode(profile.filterId, StandardCharsets.UTF_8));
+        }
+        if (js != SearchJsParameter.DEFAULT) {
+            pathBuilder.append("&js=").append(URLEncoder.encode(js.value, StandardCharsets.UTF_8));
+        }
+        if (adtech != SearchAdtechParameter.DEFAULT) {
+            pathBuilder.append("&adtech=").append(URLEncoder.encode(adtech.value, StandardCharsets.UTF_8));
+        }
+        if (recent != SearchRecentParameter.DEFAULT) {
+            pathBuilder.append("&recent=").append(URLEncoder.encode(recent.value, StandardCharsets.UTF_8));
+        }
+        if (searchTitle != SearchTitleParameter.DEFAULT) {
+            pathBuilder.append("&searchTitle=").append(URLEncoder.encode(searchTitle.value, StandardCharsets.UTF_8));
+        }
+        if (page != 1) {
+            pathBuilder.append("&page=").append(page);
+        }
+        if (newFilter) {
+            pathBuilder.append("&newfilter=").append(Boolean.valueOf(newFilter).toString());
+        }
+        return pathBuilder.toString();
} }
public RpcTemporalBias.Bias temporalBias() { public RpcTemporalBias.Bias temporalBias() {
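The rewritten renderUrl() above appends a query parameter only when it differs from its default, so typical links no longer carry the full parameter list. A small self-contained sketch of the same conditional-append idea (the render helper and the "no-filter"/"vintage" values are illustrative stand-ins, not the project's actual defaults):

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class RenderUrlSketch {
    public static void main(String[] args) {
        System.out.println(render("hello world", "no-filter", 1));
        // -> /search?query=hello+world                     (defaults omitted)
        System.out.println(render("marginalia", "vintage", 2));
        // -> /search?query=marginalia&profile=vintage&page=2
    }

    // Hypothetical stand-in for SearchParameters.renderUrl(): append a parameter
    // only when it differs from its default value.
    static String render(String query, String profile, int page) {
        StringBuilder sb = new StringBuilder("/search?");
        sb.append("query=").append(URLEncoder.encode(query, StandardCharsets.UTF_8));
        if (!"no-filter".equals(profile)) {
            sb.append("&profile=").append(URLEncoder.encode(profile, StandardCharsets.UTF_8));
        }
        if (page != 1) {
            sb.append("&page=").append(page);
        }
        return sb.toString();
    }
}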


@@ -3,27 +3,22 @@ package nu.marginalia.search.command.commands;
import com.google.inject.Inject; import com.google.inject.Inject;
import io.jooby.MapModelAndView; import io.jooby.MapModelAndView;
import io.jooby.ModelAndView; import io.jooby.ModelAndView;
import nu.marginalia.search.JteRenderer;
import nu.marginalia.search.SearchOperator; import nu.marginalia.search.SearchOperator;
import nu.marginalia.search.command.SearchCommandInterface; import nu.marginalia.search.command.SearchCommandInterface;
import nu.marginalia.search.command.SearchParameters; import nu.marginalia.search.command.SearchParameters;
import nu.marginalia.search.model.DecoratedSearchResults; import nu.marginalia.search.model.DecoratedSearchResults;
import nu.marginalia.search.model.NavbarModel; import nu.marginalia.search.model.NavbarModel;
import java.io.IOException;
import java.util.Map; import java.util.Map;
import java.util.Optional; import java.util.Optional;
public class SearchCommand implements SearchCommandInterface { public class SearchCommand implements SearchCommandInterface {
private final SearchOperator searchOperator; private final SearchOperator searchOperator;
private final JteRenderer jteRenderer;
@Inject @Inject
public SearchCommand(SearchOperator searchOperator, public SearchCommand(SearchOperator searchOperator){
JteRenderer jteRenderer) throws IOException {
this.searchOperator = searchOperator; this.searchOperator = searchOperator;
this.jteRenderer = jteRenderer;
} }
@Override @Override


@@ -28,6 +28,7 @@ import org.slf4j.LoggerFactory;
import java.sql.SQLException; import java.sql.SQLException;
import java.util.*; import java.util.*;
import java.util.concurrent.CompletableFuture; import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future; import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeUnit;
import java.util.function.Supplier; import java.util.function.Supplier;
@@ -67,8 +68,12 @@ public class SearchSiteInfoService {
this.screenshotService = screenshotService; this.screenshotService = screenshotService;
this.dataSource = dataSource; this.dataSource = dataSource;
this.searchSiteSubscriptions = searchSiteSubscriptions; this.searchSiteSubscriptions = searchSiteSubscriptions;
Thread.ofPlatform().name("Recently Added Domains Model Updater").start(this::modelUpdater);
} }
private volatile SiteOverviewModel cachedOverviewModel = new SiteOverviewModel(List.of());
@GET @GET
@Path("/site") @Path("/site")
public ModelAndView<?> handleOverview(@QueryParam String domain) { public ModelAndView<?> handleOverview(@QueryParam String domain) {
@@ -77,23 +82,48 @@ public class SearchSiteInfoService {
            return new MapModelAndView("redirect.jte", Map.of("url", "/site/"+domain));
        }
-        List<SiteOverviewModel.DiscoveredDomain> domains = new ArrayList<>();
-        try (var conn = dataSource.getConnection();
-             var stmt = conn.prepareStatement("SELECT DOMAIN_NAME, DISCOVER_DATE FROM EC_DOMAIN WHERE NODE_AFFINITY = 0 ORDER BY ID DESC LIMIT 10")) {
-            var rs = stmt.executeQuery();
-            while (rs.next()) {
-                domains.add(new SiteOverviewModel.DiscoveredDomain(rs.getString("DOMAIN_NAME"), rs.getString("DISCOVER_DATE")));
-            }
-        }
-        catch (SQLException ex) {
-            throw new RuntimeException();
-        }
        return new MapModelAndView("siteinfo/start.jte",
                Map.of("navbar", NavbarModel.SITEINFO,
-                        "model", new SiteOverviewModel(domains)));
+                        "model", cachedOverviewModel));
+    }
+
+    private void modelUpdater() {
+        while (!Thread.interrupted()) {
+            List<SiteOverviewModel.DiscoveredDomain> domains = new ArrayList<>();
+            // This query can be quite expensive, so we can't run it on demand
+            // for every request. Instead, we run it every 15 minutes and cache
+            // the result.
+            try (var conn = dataSource.getConnection();
+                 var stmt = conn.prepareStatement("""
+                         SELECT DOMAIN_NAME, DISCOVER_DATE
+                         FROM EC_DOMAIN
+                         WHERE NODE_AFFINITY = 0
+                         ORDER BY ID DESC
+                         LIMIT 10
+                         """))
+            {
+                var rs = stmt.executeQuery();
+                while (rs.next()) {
+                    domains.add(new SiteOverviewModel.DiscoveredDomain(
+                            rs.getString("DOMAIN_NAME"),
+                            rs.getString("DISCOVER_DATE"))
+                    );
+                }
+            } catch (SQLException ex) {
+                logger.warn("Failed to get recently added domains: {}", ex.getMessage());
+            }
+            cachedOverviewModel = new SiteOverviewModel(domains);
+            try {
+                TimeUnit.MINUTES.sleep(15);
+            } catch (InterruptedException e) {
+                Thread.currentThread().interrupt();
+                break;
+            }
+        }
    }
public record SiteOverviewModel(List<DiscoveredDomain> domains) { public record SiteOverviewModel(List<DiscoveredDomain> domains) {
@@ -107,10 +137,11 @@ public class SearchSiteInfoService {
@PathParam String domainName, @PathParam String domainName,
@QueryParam String view, @QueryParam String view,
@QueryParam Integer page @QueryParam Integer page
) throws SQLException { ) throws SQLException, ExecutionException {
if (null == domainName || domainName.isBlank()) { if (null == domainName || domainName.isBlank()) {
return null; // If we don't get a domain name, we redirect to the /site endpoint
return new MapModelAndView("redirect.jte", Map.of("url", "/site"));
} }
page = Objects.requireNonNullElse(page, 1); page = Objects.requireNonNullElse(page, 1);
@@ -193,7 +224,7 @@ public class SearchSiteInfoService {
); );
} }
private SiteInfoWithContext listInfo(Context context, String domainName) { private SiteInfoWithContext listInfo(Context context, String domainName) throws ExecutionException {
var domain = new EdgeDomain(domainName); var domain = new EdgeDomain(domainName);
final int domainId = domainQueries.tryGetDomainId(domain).orElse(-1); final int domainId = domainQueries.tryGetDomainId(domain).orElse(-1);


@@ -86,7 +86,7 @@
@endif @endif
@if(result.getFirst().isTracking()) @if(result.getFirst().isTracking())
<span class="px-1 bg-yellow-100 text-yellow-700 dark:border dark:border-yellow-600 dark:text-yellow-400 dark:bg-black rounded" title="Uses tracking scripts">Track</span> <span class="px-1 bg-yellow-100 text-yellow-700 dark:border dark:border-yellow-600 dark:text-yellow-400 dark:bg-black rounded" title="Uses tracking scripts">Tracking</span>
@endif @endif
@if(result.getFirst().isScripts()) @if(result.getFirst().isScripts())
@@ -94,11 +94,11 @@
@endif @endif
@if(result.getFirst().isAds()) @if(result.getFirst().isAds())
<span class="px-1 bg-red-100 text-red-700 dark:border dark:border-red-600 dark:text-red-400 dark:bg-black rounded" title="Contains adtech">Ads</span> <span class="px-1 bg-red-100 text-red-700 dark:border dark:border-red-600 dark:text-red-400 dark:bg-black rounded" title="Contains adtech">Has Ads</span>
@endif @endif
@if(result.getFirst().isAffiliate()) @if(result.getFirst().isAffiliate())
<span class="px-1 bg-red-100 text-red-700 dark:border dark:border-red-600 dark:text-red-400 dark:bg-black rounded" title="Contains Affiliate Link">Affiliate</span> <span class="px-1 bg-red-100 text-red-700 dark:border dark:border-red-600 dark:text-red-400 dark:bg-black rounded" title="Contains Affiliate Link">Has Affiliate</span>
@endif @endif
</span> </span>


@@ -36,10 +36,11 @@
</div> </div>
+    @if (filters.showRecentOption.isSet()) <input type="hidden" name="js" value="${filters.removeJsOption.value()}"> @endif
+    @if (filters.reduceAdtechOption.isSet()) <input type="hidden" name="adtech" value="${filters.reduceAdtechOption.value()}"> @endif
+    @if (filters.searchTitleOption.isSet()) <input type="hidden" name="searchTitle" value="${filters.searchTitleOption.value()}"> @endif
+    @if (filters.showRecentOption.isSet()) <input type="hidden" name="recent" value="${filters.showRecentOption.value()}"> @endif
-    <input type="hidden" name="js" value="${filters.removeJsOption.value()}">
-    <input type="hidden" name="adtech" value="${filters.reduceAdtechOption.value()}">
-    <input type="hidden" name="searchTitle" value="${filters.searchTitleOption.value()}">
     <input type="hidden" name="profile" value="${profile}">
-    <input type="hidden" name="recent" value="${filters.showRecentOption.value()}">
</form> </form>


@@ -36,7 +36,7 @@
<div class="text-slate-700 dark:text-white text-sm p-4"> <div class="text-slate-700 dark:text-white text-sm p-4">
<div class="fas fa-gift mr-1 text-margeblue dark:text-slate-200"></div> <div class="fas fa-gift mr-1 text-margeblue dark:text-slate-200"></div>
This is the new design and home of Marginalia Search. This is the new design and home of Marginalia Search.
You can about what this entails <a href="https://about.marginalia-search.com/article/redesign/" class="underline text-liteblue dark:text-blue-200">here</a>. You can read about what this entails <a href="https://about.marginalia-search.com/article/redesign/" class="underline text-liteblue dark:text-blue-200">here</a>.
<p class="my-4"></p> <p class="my-4"></p>
The old version of Marginalia Search remains available at The old version of Marginalia Search remains available at
<a href="https://old-search.marginalia.nu/" class="underline text-liteblue dark:text-blue-200">https://old-search.marginalia.nu/</a>. <a href="https://old-search.marginalia.nu/" class="underline text-liteblue dark:text-blue-200">https://old-search.marginalia.nu/</a>.


@@ -53,7 +53,7 @@
@endif @endif
@if(details.isTracking()) @if(details.isTracking())
<span class="px-1 bg-yellow-100 text-yellow-700 dark:border dark:border-yellow-600 dark:text-yellow-400 dark:bg-black rounded" title="Uses tracking scripts">Track</span> <span class="px-1 bg-yellow-100 text-yellow-700 dark:border dark:border-yellow-600 dark:text-yellow-400 dark:bg-black rounded" title="Uses tracking scripts">Tracking</span>
@endif @endif
@if(details.isScripts()) @if(details.isScripts())
@@ -65,7 +65,7 @@
@endif @endif
@if(details.isAffiliate()) @if(details.isAffiliate())
<span class="px-1 bg-red-100 text-red-700 dark:border dark:border-red-600 dark:text-red-400 dark:bg-black rounded" title="Contains Affiliate Link">Affiliate</span> <span class="px-1 bg-red-100 text-red-700 dark:border dark:border-red-600 dark:text-red-400 dark:bg-black rounded" title="Contains Affiliate Link">Has Affiliate</span>
@endif @endif
</div> </div>


@@ -1,5 +1,4 @@
@import nu.marginalia.db.DbDomainQueries @import nu.marginalia.db.DbDomainQueries
@import nu.marginalia.model.EdgeDomain
@import nu.marginalia.search.svc.SearchSiteInfoService @import nu.marginalia.search.svc.SearchSiteInfoService
@import nu.marginalia.search.svc.SearchSiteInfoService.* @import nu.marginalia.search.svc.SearchSiteInfoService.*
@import nu.marginalia.search.model.UrlDetails @import nu.marginalia.search.model.UrlDetails
@@ -81,35 +80,6 @@
@endif @endif
@if (!siteInfo.siblingDomains().isEmpty())
<div class="mx-3 flex place-items-baseline space-x-2 p-2 bg-gray-100 dark:bg-gray-600 rounded">
<i class="fas fa-globe"></i>
<span>Related Subdomains</span>
</div>
<table class="min-w-full divide-y divide-gray-200 dark:divide-gray-600 mx-4">
<thead>
<tr class="bg-gray-50 dark:bg-gray-700">
<th scope="col" class="px-2 py-2 text-left text-xs font-medium text-gray-500 dark:text-gray-100 uppercase tracking-wider">Domain Name</th>
</tr>
</thead>
<tbody class="bg-white dark:bg-gray-800 divide-y divide-gray-200 dark:divide-gray-600 text-xs">
@for (DbDomainQueries.DomainWithNode sibling : siteInfo.siblingDomains())
<tr>
<td class="px-3 py-6 md:py-3 whitespace-nowrap">
<a class="text-liteblue dark:text-blue-200" href="/site/${sibling.domain().toString()}">${sibling.domain().toString()}</a>
@if (!sibling.isIndexed())
<i class="ml-1 fa-regular fa-question-circle text-gray-400 dark:text-gray-600 text-xs" title="Not indexed"></i>
@endif
</td>
</tr>
@endfor
</tbody>
</table>
@endif
@if (siteInfo.domainInformation().isUnknownDomain()) @if (siteInfo.domainInformation().isUnknownDomain())
<div class="mx-3 flex place-items-baseline space-x-2 p-2 bg-gray-100 dark:bg-gray-600 rounded"> <div class="mx-3 flex place-items-baseline space-x-2 p-2 bg-gray-100 dark:bg-gray-600 rounded">
<i class="fa-regular fa-circle-question"></i> <i class="fa-regular fa-circle-question"></i>
@@ -178,6 +148,36 @@
</form> </form>
@endif @endif
@if (!siteInfo.siblingDomains().isEmpty())
<div class="mx-3 flex place-items-baseline space-x-2 p-2 bg-gray-100 dark:bg-gray-600 rounded">
<i class="fas fa-globe"></i>
<span>Related Subdomains</span>
</div>
<table class="min-w-full divide-y divide-gray-200 dark:divide-gray-600 mx-4">
<thead>
<tr class="bg-gray-50 dark:bg-gray-700">
<th scope="col" class="px-2 py-2 text-left text-xs font-medium text-gray-500 dark:text-gray-100 uppercase tracking-wider">Domain Name</th>
</tr>
</thead>
<tbody class="bg-white dark:bg-gray-800 divide-y divide-gray-200 dark:divide-gray-600 text-xs">
@for (DbDomainQueries.DomainWithNode sibling : siteInfo.siblingDomains())
<tr>
<td class="px-3 py-6 md:py-3 whitespace-nowrap">
<a class="text-liteblue dark:text-blue-200" href="/site/${sibling.domain().toString()}">${sibling.domain().toString()}</a>
@if (!sibling.isIndexed())
<i class="ml-1 fa-regular fa-question-circle text-gray-400 dark:text-gray-600 text-xs" title="Not indexed"></i>
@endif
</td>
</tr>
@endfor
</tbody>
</table>
@endif
@if (siteInfo.isKnown()) @if (siteInfo.isKnown())
<div class="mx-3 flex place-items-baseline space-x-2 p-2 bg-gray-100 dark:bg-gray-600 rounded"> <div class="mx-3 flex place-items-baseline space-x-2 p-2 bg-gray-100 dark:bg-gray-600 rounded">
<i class="fas fa-chart-simple"></i> <i class="fas fa-chart-simple"></i>


@@ -2,7 +2,7 @@ plugins {
id 'java' id 'java'
id 'application' id 'application'
id 'jvm-test-suite' id 'jvm-test-suite'
id 'com.google.cloud.tools.jib' version '3.4.3' id 'com.google.cloud.tools.jib' version '3.4.4'
} }
java { java {


@@ -3,7 +3,7 @@ plugins {
id 'application' id 'application'
id 'jvm-test-suite' id 'jvm-test-suite'
id 'com.google.cloud.tools.jib' version '3.4.3' id 'com.google.cloud.tools.jib' version '3.4.4'
} }
application { application {

Some files were not shown because too many files have changed in this diff.