mirror of
https://github.com/project-slippi/slippi-rust-extensions.git
synced 2025-10-05 23:22:40 +02:00
Display rank information on CSS and Game Setup Screen (#24)
* initial commit
* add extern function for sending rank info to dolphin
* minor bug fixes to fetch rank
* bug fixes to dolphin rank getter
* remove unnecessary imports
* update rank getter to use new backend
* fix rank request parser
* clean up get rank ffi
* fix bugs with refactor that prevented rank change
* clear rank data on logout
* minor cleanup
* add response status field
* wip rework rank manager to use separate thread
* more refactoring of fetcher process
* finish async fetch flow
* fetch cleanup
* push lock file
* add new response status
* misc cleanup, wip porting rust enums
* cleanup rust side rank info
* remove boxing when returning rank data
* remove utils module and fix unused code errors
* chore: run autoformatter
* chore: remove unused fields
* move thiserror to workspace dependency, simplify rank getter return
* fix: always load new rating if it exists
* Add a dedicated GraphQL call interface.
  - `APIClient` now has a `.graphql()` method that can be used to build a GraphQL request.
  - Updated `game-reporter` and `rank-info` to utilize the interface.
  - Removed/condensed specific error kinds in `game-reporter` and `rank-info` since they can more or less pass through to `GraphQLError` now.
  - Some initial cleanup of places where a lot of serialization logic was taking place that shouldn't be happening. This requires some testing; Fizzi is likely the best for that.
* Minor cleanup
* Make RankInfo default an invalid rank, general code cleanup
* Remove clone calls where we can just copy, fix ffi variable name I missed
* Remove wildcard import
* Make RankInfo default 0, not -1. The FFI interface subs in -1 instead since that's a C-specific bit anyway.
* add fetch status output and remove some fields
* Remove unused error variant
* Debug payload
* fix query string
* Allow passing in a pointer path to GraphQL requests instead of a single key
* add proper fetch path for graphql response
* remove fetch that doesn't work
* Add support for a custom HTTP timeout on GraphQL requests
* Code cleanup and slight reorg.
  - RankFetcher moved into free-standing functions to reduce boilerplate.
  - Applied `#[repr()]` tags to some enums to ensure things are consistent over time; they might work now, but it's better for us to be sure if we're relying on conversion behavior.
  - Some further docstrings added, etc.
* Refactored this slightly to make the module more organized. This is mostly moving types that are operated on in the rank fetcher into the module itself. All the fetching logic is mostly self-contained now, and the lib.rs module acts as the bootstrapping/FFI barrier. This also handles a few small odds and ends, like making sure that we reset the FetchStatus on user sign out (a very rare event, I guess), and removes a method (`get_rank`) that doesn't appear to be used anymore (`get_rank_with_info` seems to be the one that's used).
* Rename this module so its naming convention is slightly more in line with the other modules
* Add in retry logic for rank API calls. This migrates the constant background thread setup to one where we only spawn the background thread when we need to request the rank (i.e. on the CSS). This is ultimately cleaner and makes retry logic easier to implement: before this change, leaving and re-entering the CSS screen would likely have hung the client, since the background thread would be stuck in network request sleep timeouts and not be able to pick up the `Message::FetchRank` event. (Fizzi should probably tweak the retry timeout per his needs)
* Remove extra status set
* fix reporting regressions
* exec retries if the rank takes time to update
* add comment
* chore: set fetching status synchronously
* remove old fetching status set
* fetch match result directly to ensure correctness. Before, it would have been possible for some abandoned matches to get reported while we were in a game, and the rating change would not have shown the impact of the last match but rather the impact of all matches combined. This could make it look like someone lost points when they won, for example. The new method avoids all of that.
* clean up things that won't be needed soon
* Move rank code into the user module
* Fmt, fix commit
* Refactor user and rank relation.
  - `UserManager` now holds `RankInfo` directly and threads it through calls that need access to it.
  - `RankFetcher` encompasses the logic we use for fetching updated rank data and merging it into existing `RankInfo`.
  - `UserManager` will now update `RankInfo` on the first user info update from the server.
  - General code cleanup and formatting.
* Properly (I think) set rank data on client start/sign-in.
  - This changes the rank fetcher status to default to `Fetched`, since we seem to expect there to be rank data in the initial sign-in request anyway; i.e. there shouldn't really be a scenario where ranked data is `NotFetched`.
  - More general cleanup; I am too tired to write this commit.
* fix issue where deserialization of rank could fail
* Set rank fetcher status if we fail to overwrite user data from server.
  - Cleans up `overwrite_from_server` so it's less of a headache to read, and hoists the error so that we can ensure the rank fetcher status is set to error in the event of any problems.
  - Fixes some logging statements that didn't have their target set.
  - Removes some clones that we didn't need to be doing so eagerly. Sometimes I hate programming.
* manage fetch status on user load
* don't retry login on server failure

---------

Co-authored-by: Jas Laferriere <Fizzi36@gmail.com>
Co-authored-by: Ryan McGrath <ryan@rymc.io>
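The builder-style GraphQL interface described above (`.graphql()` on `APIClient`, then `.variables()`, `.data_field()`, `.send()`) can be sketched as a dependency-free mock. Everything here is a simplified stand-in: the real `GraphQLBuilder` in `slippi-gg-api/src/graphql.rs` holds an `APIClient`, stores `variables` as a `serde_json::Value`, and actually posts the body via `ureq`.

```rust
use std::collections::HashMap;
use std::time::Duration;

// Dependency-free sketch of the builder chain's shape.
#[derive(Debug)]
struct GraphQLRequest {
    body: HashMap<&'static str, String>,
    response_field: Option<String>,
    request_timeout: Duration,
}

impl GraphQLRequest {
    fn new(query: &str) -> Self {
        let mut body = HashMap::new();
        body.insert("query", query.to_string());
        Self {
            body,
            response_field: None,
            request_timeout: Duration::from_millis(5000),
        }
    }

    // Attach a pre-serialized `variables` payload (the real builder
    // accepts a `serde_json::Value` built with `json!()`).
    fn variables(mut self, vars: &str) -> Self {
        self.body.insert("variables", vars.to_string());
        self
    }

    // JSON-Pointer path to pluck out of the response before deserializing.
    fn data_field(mut self, pointer: &str) -> Self {
        self.response_field = Some(pointer.to_string());
        self
    }
}

fn main() {
    let req = GraphQLRequest::new("mutation { reportOnlineMatchStatus }")
        .variables("{\"report\":{}}")
        .data_field("/data/reportOnlineMatchStatus");

    assert_eq!(req.response_field.as_deref(), Some("/data/reportOnlineMatchStatus"));
    assert!(req.body.contains_key("query"));
}
```

The point of the chain is that call sites (e.g. `report_match_status` in the game reporter) describe the request declaratively and let one shared `send` path handle serialization, error checking, and response extraction.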
Cargo.lock | 6 (generated)
@@ -1255,6 +1255,10 @@ dependencies = [
 name = "slippi-gg-api"
 version = "0.1.0"
 dependencies = [
+ "dolphin-integrations",
+ "serde",
+ "serde_json",
+ "thiserror",
  "tracing",
  "ureq",
 ]
@@ -1279,8 +1283,10 @@ dependencies = [
"serde",
"serde_json",
"slippi-gg-api",
"thiserror",
"time",
"tracing",
"ureq",
]

[[package]]
@@ -28,6 +28,7 @@ time = { version = "0.3.41", default-features = false, features = ["formatting",
serde = { version = "1", features = ["derive"] }
serde_json = { version = "1" }
serde_repr = { version = "0.1" }
thiserror = { version = "1.0.44" }

# We disable the "attributes" feature as we don't currently need it and it brings
# in extra dependencies.
@@ -45,6 +45,18 @@ typedef struct SlippiRustEXIConfig {
   void (*osd_add_msg_fn)(const char*, uint32_t, uint32_t);
 } SlippiRustEXIConfig;
 
+/**
+ * Rank info that we vend back to the Dolphin side of things.
+ */
+typedef struct RustRankInfo {
+  int fetch_status;
+  char rank;
+  float rating_ordinal;
+  unsigned int rating_update_count;
+  float rating_change;
+  int rank_change;
+} RustRankInfo;
+
 /**
  * An intermediary type for moving `UserInfo` across the FFI boundary.
  *
@@ -257,6 +269,16 @@ void slprs_logging_update_container(const char *kind, bool enabled, int level);
  */
 void slprs_mainline_logging_update_log_level(int level);
 
+/**
+ * Fetches the result of a recently played match via its ID.
+ */
+void slprs_fetch_match_result(uintptr_t exi_device_instance_ptr, const char *match_id);
+
+/**
+ * Gets the most recently fetched rank information of the user currently logged in.
+ */
+struct RustRankInfo slprs_get_rank_info(uintptr_t exi_device_instance_ptr);
+
 /**
  * Instructs the `UserManager` on the EXI Device at the provided pointer to attempt
  * authentication. This runs synchronously on whatever thread it's called on.
@@ -13,6 +13,7 @@ pub mod exi;
 pub mod game_reporter;
 pub mod jukebox;
 pub mod logger;
+pub mod rank_info;
 pub mod user;
 
 /// A small helper method for moving in and out of our known types.
ffi/src/rank_info.rs | 44 (new file)
@@ -0,0 +1,44 @@
use std::ffi::{c_char, c_float, c_int, c_uint};

use slippi_exi_device::SlippiEXIDevice;

use crate::c_str_to_string;
use crate::with_returning;

/// Rank info that we vend back to the Dolphin side of things.
#[repr(C)]
pub struct RustRankInfo {
    pub fetch_status: c_int,
    pub rank: c_char,
    pub rating_ordinal: c_float,
    pub rating_update_count: c_uint,
    pub rating_change: c_float,
    pub rank_change: c_int,
}

/// Fetches the result of a recently played match via its ID.
#[unsafe(no_mangle)]
pub extern "C" fn slprs_fetch_match_result(exi_device_instance_ptr: usize, match_id: *const c_char) {
    with_returning::<SlippiEXIDevice, _, _>(exi_device_instance_ptr, |device| {
        let fn_name = "slprs_fetch_match_result";
        let match_id = c_str_to_string(match_id, fn_name, "match_id");
        device.user_manager.fetch_match_result(match_id);
    })
}

/// Gets the most recently fetched rank information of the user currently logged in.
#[unsafe(no_mangle)]
pub extern "C" fn slprs_get_rank_info(exi_device_instance_ptr: usize) -> RustRankInfo {
    with_returning::<SlippiEXIDevice, _, _>(exi_device_instance_ptr, |device| {
        let (rank, fetch_status) = device.user_manager.current_rank_and_status();

        RustRankInfo {
            fetch_status: fetch_status as c_int,
            rank: rank.rank as c_char,
            rating_ordinal: rank.rating_ordinal as c_float,
            rating_update_count: rank.rating_update_count as c_uint,
            rating_change: rank.rating_change as c_float,
            rank_change: rank.rank_change as c_int,
        }
    })
}
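For illustration, the field-by-field cast pattern that `slprs_get_rank_info` uses above can be reproduced as a standalone sketch. The internal `RankInfo` fields here mirror the struct in `user/src/lib.rs`, and `fetch_status` is passed through as a plain integer; the helper name `to_ffi` is hypothetical.

```rust
use std::ffi::{c_char, c_float, c_int, c_uint};

// Internal representation (mirrors `RankInfo` in the user crate,
// minus the placing fields, which aren't vended over FFI).
#[derive(Clone, Copy, Debug, Default)]
struct RankInfo {
    rank: i8,
    rating_ordinal: f32,
    rating_update_count: u32,
    rating_change: f32,
    rank_change: i8,
}

// C-compatible mirror, matching the `RustRankInfo` layout above.
#[repr(C)]
struct RustRankInfo {
    fetch_status: c_int,
    rank: c_char,
    rating_ordinal: c_float,
    rating_update_count: c_uint,
    rating_change: c_float,
    rank_change: c_int,
}

// Cast each field across the FFI boundary, as `slprs_get_rank_info` does.
// (`c_char` is platform-dependent, hence the explicit `as` casts.)
fn to_ffi(rank: RankInfo, fetch_status: i32) -> RustRankInfo {
    RustRankInfo {
        fetch_status: fetch_status as c_int,
        rank: rank.rank as c_char,
        rating_ordinal: rank.rating_ordinal as c_float,
        rating_update_count: rank.rating_update_count as c_uint,
        rating_change: rank.rating_change as c_float,
        rank_change: rank.rank_change as c_int,
    }
}

fn main() {
    // A default (rank 0) struct crosses the boundary unchanged.
    let ffi = to_ffi(RankInfo::default(), 0);
    assert_eq!(ffi.rank as i8, 0);
    assert_eq!(ffi.rating_update_count, 0);
}
```

Because both structs are `Copy`-style plain data and `RustRankInfo` is `#[repr(C)]`, the value can be returned by value to the C++ side without any allocation or boxing, which is what the "remove boxing when returning rank data" commit refers to.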
@@ -9,16 +9,14 @@ use std::time::Duration;
 
 use flate2::Compression;
 use flate2::write::GzEncoder;
-use serde_json::{Value, json};
+use serde_json::json;
 
 use dolphin_integrations::{Color, Dolphin, Duration as OSDDuration, Log};
-use slippi_gg_api::APIClient;
+use slippi_gg_api::{APIClient, GraphQLError};
 
 use crate::types::{GameReport, GameReportRequestPayload, OnlinePlayMode};
 use crate::{ProcessingEvent, StatusReportEvent};
 
-const GRAPHQL_URL: &str = "https://internal.slippi.gg/graphql";
-
 /// How many times a report should attempt to send.
 const MAX_REPORT_ATTEMPTS: i32 = 5;
@@ -116,19 +114,22 @@ pub fn report_match_status(api_client: &APIClient, uid: String, match_id: String
     }
     "#;
 
-    let variables = Some(json!({
+    let variables = json!({
         "report": {
             "matchId": match_id,
             "fbUid": uid,
             "playKey": play_key,
            "status": status,
         }
-    }));
+    });
 
-    let res = execute_graphql_query(api_client, mutation, variables, Some("reportOnlineMatchStatus"));
-
-    match res {
-        Ok(value) if value == "true" => {
+    match api_client
+        .graphql(mutation)
+        .variables(variables)
+        .data_field("/data/reportOnlineMatchStatus")
+        .send::<bool>()
+    {
+        Ok(value) if value => {
             tracing::info!(target: Log::SlippiOnline, "Successfully executed status report request: {status}")
         },
         Ok(value) => tracing::error!(target: Log::SlippiOnline, ?value, "Error executing status report request: {status}"),
@@ -239,13 +240,10 @@ fn process_reports(queue: &GameReporterQueue, event: ProcessingEvent) {
 #[derive(Debug)]
 enum ReportSendErrorKind {
-    #[allow(dead_code)]
-    Net(slippi_gg_api::Error),
+    GraphQL(GraphQLError),
 
-    #[allow(dead_code)]
-    JSON(serde_json::Error),
-    #[allow(dead_code)]
-    GraphQL(String),
-    #[allow(dead_code)]
-    NotSuccessful(String),
+    NotSuccessful,
 }
 
 /// Wraps errors that can occur during report sending.
@@ -294,86 +292,32 @@ fn try_send_next_report(
     }
     "#;
 
-    let variables = Some(json!({
+    let variables = json!({
         "report": payload,
-    }));
+    });
 
-    // Call execute_graphql_query and get the response body as a String.
-    let response_body =
-        execute_graphql_query(api_client, mutation, variables, Some("reportOnlineGame")).map_err(|e| ReportSendError {
+    let response: ReportResponse = api_client
+        .graphql(mutation)
+        .variables(variables)
+        .data_field("/data/reportOnlineGame")
+        .send()
+        .map_err(|error| ReportSendError {
             is_last_attempt,
             sleep_ms: error_sleep_ms,
-            kind: e,
+            kind: ReportSendErrorKind::GraphQL(error),
         })?;
 
-    // Now, parse the response JSON to get the data you need.
-    let response: ReportResponse = serde_json::from_str(&response_body).map_err(|e| ReportSendError {
-        is_last_attempt,
-        sleep_ms: error_sleep_ms,
-        kind: ReportSendErrorKind::JSON(e),
-    })?;
-
     if !response.success {
         return Err(ReportSendError {
             is_last_attempt,
            sleep_ms: error_sleep_ms,
-            kind: ReportSendErrorKind::NotSuccessful(response_body),
+            kind: ReportSendErrorKind::NotSuccessful,
         });
     }
 
     Ok(response.upload_url)
 }
 
-/// Prepares and executes a GraphQL query.
-fn execute_graphql_query(
-    api_client: &APIClient,
-    query: &str,
-    variables: Option<Value>,
-    field: Option<&str>,
-) -> Result<String, ReportSendErrorKind> {
-    // Prepare the GraphQL request payload
-    let request_body = match variables {
-        Some(vars) => json!({
-            "query": query,
-            "variables": vars,
-        }),
-        None => json!({
-            "query": query,
-        }),
-    };
-
-    // Make the GraphQL request
-    let response = api_client
-        .post(GRAPHQL_URL)
-        .send_json(&request_body)
-        .map_err(ReportSendErrorKind::Net)?;
-
-    // Parse the response JSON
-    let response_json: Value =
-        serde_json::from_str(&response.into_string().unwrap_or_default()).map_err(ReportSendErrorKind::JSON)?;
-
-    // Check for GraphQL errors
-    if let Some(errors) = response_json.get("errors") {
-        if errors.is_array() && !errors.as_array().unwrap().is_empty() {
-            let error_message = serde_json::to_string_pretty(errors).unwrap();
-            return Err(ReportSendErrorKind::GraphQL(error_message));
-        }
-    }
-
-    // Return the data response
-    if let Some(data) = response_json.get("data") {
-        let result = match field {
-            Some(field) => data.get(field).unwrap_or(data),
-            None => data,
-        };
-        Ok(result.to_string())
-    } else {
-        Err(ReportSendErrorKind::GraphQL(
-            "No 'data' field in the GraphQL response.".to_string(),
-        ))
-    }
-}
-
 /// Gzip compresses `input` data to `output` data.
 fn compress_to_gzip(input: &[u8], output: &mut [u8]) -> Result<usize, std::io::Error> {
     let mut encoder = GzEncoder::new(output, Compression::default());
@@ -19,5 +19,5 @@ mainline = []
 dolphin-integrations = { path = "../dolphin" }
 hps_decode = { version = "0.2.1", features = ["rodio-source"] }
 rodio = { version = "0.17.1", default-features = false }
-thiserror = "1.0.44"
+thiserror = { workspace = true }
 tracing = { workspace = true }
@@ -11,8 +11,13 @@ edition = "2024"
 publish = false
 
 [dependencies]
-ureq = { workspace = true }
+serde = { workspace = true }
+serde_json = { workspace = true }
+thiserror = { workspace = true }
 tracing = { workspace = true }
+ureq = { workspace = true }
+
+dolphin-integrations = { path = "../dolphin" }
 
 [features]
 default = ["ishiiruka"]
slippi-gg-api/src/graphql.rs | 170 (new file)
@@ -0,0 +1,170 @@
use std::borrow::Cow;
use std::collections::HashMap;
use std::time::Duration;

use serde_json::Value;
use thiserror::Error;

use dolphin_integrations::Log;

use super::APIClient;

/// Various errors that can happen during a GraphQL request.
#[derive(Debug, Error)]
pub enum GraphQLError {
    #[error("Expected {0} data key, but returned payload has none.")]
    MissingResponseField(String),

    #[error("Expected data key, but returned payload has none.")]
    MissingResponseData,

    #[error(transparent)]
    Request(ureq::Error),

    #[error(transparent)]
    IO(std::io::Error),

    #[error(transparent)]
    InvalidResponseType(serde_json::Error),

    #[error(transparent)]
    InvalidResponseJSON(serde_json::Error),

    #[error("GraphQL call returned errors: {0}")]
    Server(String),
}

/// A builder pattern that makes constructing and parsing GraphQL
/// responses simpler.
///
/// You generally shouldn't create this type yourself; call `.graphql()`
/// on an `APIClient` instance to receive one for use.
#[derive(Debug)]
pub struct GraphQLBuilder {
    client: APIClient,
    endpoint: Cow<'static, str>,
    response_field: Option<Cow<'static, str>>,
    body: HashMap<&'static str, Value>,
    request_timeout: Duration,
}

impl GraphQLBuilder {
    /// Creates and returns a new GraphQLBuilder type.
    pub fn new(client: APIClient, query: String) -> Self {
        let mut body = HashMap::new();
        body.insert("query", Value::String(query));

        Self {
            client,
            endpoint: Cow::Borrowed("https://internal.slippi.gg/graphql"),
            response_field: None,
            body,
            request_timeout: super::default_timeout(),
        }
    }

    /// Sets a custom timeout on this call. If not set, then this will
    /// default to whatever the `APIClient` currently has set.
    pub fn timeout(mut self, timeout: Duration) -> Self {
        self.request_timeout = timeout;
        self
    }

    /// Sets optional `variables` for the GraphQL payload.
    ///
    /// In the future, this might be widened to accept any type
    /// that implements `serde::Serialize`. At the moment all our
    /// calls work on built `Value` types using the `json!()` macro
    /// anyway so there's no need to complicate the builder chain with it.
    pub fn variables(mut self, variables: Value) -> Self {
        self.body.insert("variables", variables);
        self
    }

    /// Sets an optional key that the response handler should use as its
    /// return type. If this is not configured, the response handler will
    /// use the entire `data` payload for deserialization.
    pub fn data_field<Pointer>(mut self, pointer: Pointer) -> Self
    where
        Pointer: Into<Cow<'static, str>>,
    {
        self.response_field = Some(pointer.into());
        self
    }

    /// Consumes and sends the request, deserializing the response and yielding
    /// any errors in the process.
    pub fn send<T>(self) -> Result<T, GraphQLError>
    where
        T: serde::de::DeserializeOwned,
    {
        let response = self
            .client
            .post(self.endpoint.as_ref())
            .timeout(self.request_timeout)
            .send_json(&self.body)
            .map_err(GraphQLError::Request)?
            .into_string()
            .map_err(GraphQLError::IO)?;

        parse(&self, &response).inspect_err(|error| match error {
            // This is a fully parsed error from the server, so we don't
            // need to keep the response body around for debugging.
            GraphQLError::Server(_) => {},

            // For non-parsable error situations, we want to go ahead and
            // dump the response body to make debugging easier.
            _ => {
                tracing::error!(
                    target: Log::SlippiOnline,
                    "GraphQL response body: {}",
                    response
                );
            },
        })
    }
}

/// Attempts to parse a returned response body.
///
/// This is mostly separated to provide a more concise `GraphQLBuilder::send`
/// method with regards to some specific logging we want to do.
fn parse<T>(request: &GraphQLBuilder, response_body: &str) -> Result<T, GraphQLError>
where
    T: serde::de::DeserializeOwned,
{
    // We always go through `Value` first in order to check any
    // potential errors and remove anything the caller doesn't need.
    let mut response: Value = serde_json::from_str(response_body).map_err(|error| {
        tracing::error!(target: Log::SlippiOnline, ?error, "Failed to deserialize GraphQL response");
        GraphQLError::InvalidResponseType(error)
    })?;

    // Errors will always be in the `errors` slot, so check that first.
    if let Some(errors) = response.get("errors") {
        if errors.is_array() && !errors.as_array().unwrap().is_empty() {
            // In the event that pretty printing somehow fails, just fall back
            // to the `Value` debug impl. It'll communicate well enough what
            // happened and is a rare edge case anyway.
            let messages = serde_json::to_string_pretty(&errors).map_err(|error| {
                tracing::error!(target: Log::SlippiOnline, ?error, "Failed to pretty-format error string");
                GraphQLError::Server(format!("{:?}", errors))
            })?;

            return Err(GraphQLError::Server(messages));
        }
    }

    // Now attempt to extract the response payload. If we have it, then we'll attempt
    // to deserialize it to the expected response type.
    let data = if let Some(path) = &request.response_field {
        response
            .pointer_mut(path.as_ref())
            .ok_or_else(|| GraphQLError::MissingResponseField(path.to_string()))?
            .take()
    } else {
        response.get_mut("data").ok_or(GraphQLError::MissingResponseData)?.take()
    };

    serde_json::from_value(data).map_err(GraphQLError::InvalidResponseJSON)
}
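The `/data/...` extraction that `data_field` configures above relies on `serde_json`'s `Value::pointer_mut`, which implements RFC 6901 JSON Pointers. The traversal can be illustrated with a minimal, dependency-free stand-in (a toy `Value` type, and ignoring the `~0`/`~1` escaping rules of the full spec):

```rust
use std::collections::HashMap;

// Toy nested value standing in for `serde_json::Value`.
enum Value {
    Object(HashMap<String, Value>),
    Bool(bool),
}

// Minimal JSON-Pointer-style lookup for paths like "/data/reportOnlineGame".
fn pointer<'a>(root: &'a Value, path: &str) -> Option<&'a Value> {
    path.split('/')
        .filter(|seg| !seg.is_empty()) // skip the empty leading segment
        .try_fold(root, |node, seg| match node {
            Value::Object(map) => map.get(seg),
            _ => None,
        })
}

fn main() {
    // Shape: {"data": {"reportOnlineMatchStatus": true}}
    let mut inner = HashMap::new();
    inner.insert("reportOnlineMatchStatus".to_string(), Value::Bool(true));
    let mut outer = HashMap::new();
    outer.insert("data".to_string(), Value::Object(inner));
    let root = Value::Object(outer);

    assert!(matches!(
        pointer(&root, "/data/reportOnlineMatchStatus"),
        Some(Value::Bool(true))
    ));
    assert!(pointer(&root, "/data/missing").is_none());
}
```

This is why callers pass a full pointer path ("Allow passing in a pointer path to GraphQL requests instead of a single key" in the commit log): the builder can pluck arbitrarily nested fields before handing the remainder to `serde` for deserialization.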
@@ -5,6 +5,9 @@ use std::time::Duration;
 
 use ureq::{Agent, AgentBuilder, Resolver};
 
+mod graphql;
+pub use graphql::{GraphQLBuilder, GraphQLError};
+
 /// Re-export `ureq::Error` for simplicity.
 pub type Error = ureq::Error;
@@ -26,6 +29,12 @@ impl Resolver for Ipv4Resolver {
     }
 }
 
+/// Default timeout that we use on client types. Extracted
+/// so that the GraphQLBuilder can also call it.
+pub(crate) fn default_timeout() -> Duration {
+    Duration::from_millis(5000)
+}
+
 /// A wrapper type that simply dereferences to a `ureq::Agent`.
 ///
 /// It's extracted purely for ease of debugging, and for segmenting
@@ -61,12 +70,20 @@ impl APIClient {
         let http_client = AgentBuilder::new()
             .resolver(Ipv4Resolver)
             .max_idle_connections(5)
-            .timeout(Duration::from_millis(5000))
+            .timeout(default_timeout())
             .user_agent(&format!("SlippiDolphin/{} ({}) (Rust)", _build, slippi_semver))
             .build();
 
         Self(http_client)
     }
 
+    /// Returns a type that can be used to construct GraphQL requests.
+    pub fn graphql<Query>(&self, query: Query) -> GraphQLBuilder
+    where
+        Query: Into<String>,
+    {
+        GraphQLBuilder::new(self.clone(), query.into())
+    }
 }
 
 impl Deref for APIClient {
@@ -17,9 +17,12 @@ playback = []
 
 [dependencies]
 dolphin-integrations = { path = "../dolphin" }
-slippi-gg-api = { path = "../slippi-gg-api" }
 
 open = "5"
 serde = { workspace = true }
 serde_json = { workspace = true }
+slippi-gg-api = { path = "../slippi-gg-api" }
+thiserror = { workspace = true }
+time = { workspace = true }
 tracing = { workspace = true }
+ureq = { workspace = true }
@@ -64,12 +64,12 @@ impl DirectCodes {
             },
 
             Err(error) => {
-                tracing::error!(?error, "Unable to parse direct codes file");
+                tracing::error!(target: Log::SlippiOnline, ?error, "Unable to parse direct codes file");
             },
         },
 
         Err(error) => {
-            tracing::error!(?error, "Unable to read direct codes file");
+            tracing::error!(target: Log::SlippiOnline, ?error, "Unable to read direct codes file");
         },
     }
user/src/lib.rs | 213
|
||||
//! This module contains data models and helper methods for handling user authentication
|
||||
//! from within Slippi Dolphin.
|
||||
//! This module contains data models and helper methods for handling user authentication and
|
||||
//! interaction from within Slippi Dolphin.
|
||||
|
||||
use std::path::PathBuf;
|
||||
use std::sync::{Arc, Mutex};
|
||||
|
||||
use dolphin_integrations::Log;
|
||||
use slippi_gg_api::APIClient;
|
||||
|
||||
mod chat;
|
||||
@@ -12,6 +13,10 @@ pub use chat::DEFAULT_CHAT_MESSAGES;
|
||||
mod direct_codes;
|
||||
use direct_codes::DirectCodes;
|
||||
|
||||
mod rank_fetcher;
|
||||
pub use rank_fetcher::RankFetchStatus;
|
||||
use rank_fetcher::{RankFetcher, RankFetcherStatus, SlippiRank};
|
||||
|
||||
mod watcher;
|
||||
use watcher::UserInfoWatcher;
|
||||
|
||||
@@ -50,6 +55,18 @@ impl UserInfo {
|
||||
}
|
||||
}
|
||||
|
||||
/// Represents a slice of rank information from the Slippi server.
|
||||
#[derive(Clone, Copy, Debug, Default)]
|
||||
pub struct RankInfo {
|
||||
pub rank: i8,
|
||||
pub rating_ordinal: f32,
|
||||
pub global_placing: u16,
|
||||
pub regional_placing: u16,
|
||||
pub rating_update_count: u32,
|
||||
pub rating_change: f32,
|
||||
pub rank_change: i8,
|
||||
}
|
||||
|
||||
/// A thread-safe handle for the User Manager. This uses an `Arc` under the hood, so you don't
|
||||
/// need to do so if you're storing it.
|
||||
///
|
||||
@@ -60,11 +77,13 @@ impl UserInfo {
|
||||
pub struct UserManager {
|
||||
api_client: APIClient,
|
||||
user: Arc<Mutex<UserInfo>>,
|
||||
rank: Arc<Mutex<RankInfo>>,
|
||||
user_json_path: Arc<PathBuf>,
|
||||
pub direct_codes: DirectCodes,
|
||||
pub teams_direct_codes: DirectCodes,
|
||||
slippi_semver: String,
|
||||
watcher: Arc<Mutex<UserInfoWatcher>>,
|
||||
rank_fetcher: RankFetcher,
|
||||
}
|
||||
|
||||
impl UserManager {
|
||||
@@ -74,6 +93,7 @@ impl UserManager {
|
||||
/// live. This is an OS-specific value and we currently need to share it with Dolphin,
|
||||
/// so this should be passed via the FFI layer. In the future, we may be able to remove
|
||||
/// this restriction via some assumptions.
|
||||
///
|
||||
// @TODO: The semver param here should get refactored away in time once we've ironed out
|
||||
// how some things get persisted from the Dolphin side. Not a big deal to thread it for now.
|
||||
pub fn new(api_client: APIClient, mut user_config_folder: PathBuf, slippi_semver: String) -> Self {
|
||||
@@ -95,16 +115,21 @@ impl UserManager {
|
||||
});
|
||||
|
||||
let user = Arc::new(Mutex::new(UserInfo::default()));
|
||||
let rank = Arc::new(Mutex::new(RankInfo::default()));
|
||||
|
||||
let watcher = Arc::new(Mutex::new(UserInfoWatcher::new()));
|
||||
let rank_fetcher = RankFetcher::new();
|
||||
|
||||
Self {
|
||||
api_client,
|
||||
user,
|
||||
rank,
|
||||
user_json_path,
|
||||
direct_codes,
|
||||
teams_direct_codes,
|
||||
slippi_semver,
|
||||
watcher,
|
||||
rank_fetcher,
|
||||
}
|
||||
}
|
||||
|
||||
@@ -158,7 +183,14 @@ impl UserManager {
|
||||
/// Runs the `attempt_login` function on the calling thread. If you need this to run in the
|
||||
/// background, you want `watch_for_login` instead.
|
||||
pub fn attempt_login(&self) -> bool {
|
||||
attempt_login(&self.api_client, &self.user, &self.user_json_path, &self.slippi_semver)
|
||||
attempt_login(
|
||||
&self.api_client,
|
||||
&self.user,
|
||||
&self.rank,
|
||||
&self.rank_fetcher.status,
|
||||
&self.user_json_path,
|
||||
&self.slippi_semver,
|
||||
)
|
||||
}
|
||||
|
||||
/// Kicks off a background handler for processing user authentication.
|
||||
@@ -166,9 +198,11 @@ impl UserManager {
|
||||
let mut watcher = self.watcher.lock().expect("Unable to acquire user watcher lock");
|
||||
|
||||
watcher.watch_for_login(
|
||||
self.api_client.clone(),
|
||||
self.user_json_path.clone(),
|
||||
self.user.clone(),
|
||||
&self.api_client,
|
||||
&self.user_json_path,
|
||||
&self.user,
|
||||
&self.rank,
|
||||
&self.rank_fetcher.status,
|
||||
&self.slippi_semver,
|
||||
);
|
||||
}
|
||||
@@ -181,15 +215,15 @@ impl UserManager {
|
||||
if let Some(path) = path_ref.to_str() {
|
||||
let url = format!("https://slippi.gg/online/enable?path={path}");
|
||||
|
||||
tracing::info!("[User] Login at path: {}", url);
|
||||
tracing::info!(target: Log::SlippiOnline, "[User] Login at path: {}", url);
|
||||
|
||||
if let Err(error) = open::that_detached(&url) {
|
||||
tracing::error!(?error, ?url, "Failed to open login page");
|
||||
tracing::error!(target: Log::SlippiOnline, ?error, ?url, "Failed to open login page");
|
||||
}
|
||||
} else {
|
||||
// This should never really happen, but it's conceivable that some odd unicode path
|
||||
// errors could happen... so just dump a log I guess.
|
||||
tracing::warn!(?path_ref, "Unable to convert user.json path to UTF-8 string");
|
||||
tracing::warn!(target: Log::SlippiOnline, ?path_ref, "Unable to convert user.json path to UTF-8 string");
|
||||
}
|
||||
}
|
||||
|
||||
@@ -197,7 +231,7 @@ impl UserManager {
|
||||
/// by, but still used.
|
||||
pub fn update_app(&self) -> bool {
|
||||
if let Err(error) = open::that_detached("https://slippi.gg/downloads?update=true") {
|
||||
tracing::error!(?error, "Failed to open update URL");
|
||||
tracing::error!(target: Log::SlippiOnline, ?error, "Failed to open update URL");
|
||||
return false;
|
||||
}
|
||||
|
||||
@@ -218,12 +252,33 @@ impl UserManager {
|
||||
});
|
||||
}
|
||||
|
||||
+    /// Gets the current rank state (even if blank), along with the current status of
+    /// any ongoing fetch operations.
+    pub fn current_rank_and_status(&self) -> (RankInfo, RankFetchStatus) {
+        let data = self.rank.lock().unwrap();
+        let status = self.rank_fetcher.status.get();
+
+        (*data, status)
+    }
+
+    /// Instructs the rank manager to check for any rank updates.
+    pub fn fetch_match_result(&self, match_id: String) {
+        let client = self.api_client.clone();
+        let (uid, play_key) = self.get(|user| (user.uid.clone(), user.play_key.clone()));
+        let rank = self.rank.clone();
+
+        self.rank_fetcher.fetch_match_result(client, match_id, uid, play_key, rank);
+    }
+
    /// Logs the current user out and removes their `user.json` from the filesystem.
    pub fn logout(&mut self) {
+        // Reset rank state values to defaults
+        self.rank = Arc::new(Mutex::new(RankInfo::default()));
+        self.rank_fetcher.status.set(RankFetchStatus::Error);
        self.set(|user| *user = UserInfo::default());

        if let Err(error) = std::fs::remove_file(self.user_json_path.as_path()) {
-            tracing::error!(?error, "Failed to remove user.json on logout");
+            tracing::error!(target: Log::SlippiOnline, ?error, "Failed to remove user.json on logout");
        }

        let mut watcher = self.watcher.lock().expect("Unable to acquire watcher lock on user logout");

@@ -235,7 +290,16 @@ impl UserManager {
    /// Checks for the existence of a `user.json` file and, if found, attempts to load and parse it.
    ///
    /// This returns a `bool` value so that the background thread can know whether to stop checking.
-fn attempt_login(api_client: &APIClient, user: &Arc<Mutex<UserInfo>>, user_json_path: &PathBuf, slippi_semver: &str) -> bool {
+fn attempt_login(
+    api_client: &APIClient,
+    user: &Mutex<UserInfo>,
+    rank: &Mutex<RankInfo>,
+    rank_fetcher_status: &RankFetcherStatus,
+    user_json_path: &PathBuf,
+    slippi_semver: &str,
+) -> bool {
+    let mut parse_successful = false;
+
    match std::fs::read_to_string(user_json_path) {
        Ok(contents) => match serde_json::from_str::<UserInfo>(&contents) {
            Ok(mut info) => {
@@ -244,18 +308,31 @@ fn attempt_login(api_client: &APIClient, user: &Arc<Mutex<UserInfo>>, user_json_
                let uid = info.uid.clone();
                {
                    let mut lock = user.lock().expect("Unable to lock user in attempt_login");

                    *lock = info;
                }

-                overwrite_from_server(api_client, user, uid, slippi_semver);
-                return true;
+                parse_successful = true; // Will cause fn to return true
+
+                // Indicate rank is being fetched
+                rank_fetcher_status.set(RankFetchStatus::Fetching);
+
+                let api_res = overwrite_from_server(api_client, user, rank, uid, slippi_semver);
+
+                // Set ranked status to fetched if success, error if not
+                rank_fetcher_status.set(if api_res.is_ok() {
+                    RankFetchStatus::Fetched
+                } else {
+                    RankFetchStatus::Error
+                });
+
+                if let Err(error) = &api_res {
+                    tracing::error!(target: Log::SlippiOnline, ?error, "Failed to fetch user info from server");
+                }
            },

+            // JSON parsing error
            Err(error) => {
-                tracing::error!(?error, "Unable to parse user.json");
-                return false;
+                tracing::error!(target: Log::SlippiOnline, ?error, "Unable to parse user.json");
            },
        },

@@ -263,12 +340,12 @@ fn attempt_login(api_client: &APIClient, user: &Arc<Mutex<UserInfo>>, user_json_
        Err(error) => {
            // A not-found file just means they haven't logged in yet... presumably.
            if error.kind() != std::io::ErrorKind::NotFound {
-                tracing::error!(?error, "Unable to read user.json");
+                tracing::error!(target: Log::SlippiOnline, ?error, "Unable to read user.json");
            }

            return false;
        },
    }

+    parse_successful
}

/// The core payload that represents user information. This type is expected to conform
@@ -288,51 +365,87 @@ struct UserInfoAPIResponse {
    #[serde(alias = "chatMessages")]
    pub chat_messages: Vec<String>,

+    #[serde(alias = "rank")]
+    pub rank: UserRankInfo,
}

+#[derive(Debug, Default, serde::Deserialize)]
+pub struct UserRankInfo {
+    #[serde(alias = "ratingOrdinal")]
+    pub rating_ordinal: f32,
+
+    #[serde(alias = "dailyGlobalPlacement")]
+    pub global_placing: Option<u16>,
+
+    #[serde(alias = "dailyRegionalPlacement")]
+    pub regional_placing: Option<u16>,
+
+    #[serde(alias = "ratingUpdateCount")]
+    pub rating_update_count: u32,
+}
+
+#[derive(Debug, thiserror::Error)]
+enum APILoginError {
+    #[error(transparent)]
+    Client(ureq::Error),
+
+    #[error(transparent)]
+    IO(std::io::Error),
+}
+
/// Calls out to the Slippi server and fetches the user info, patching up the user info object
/// with any returned information.
-fn overwrite_from_server(api_client: &APIClient, user: &Arc<Mutex<UserInfo>>, uid: String, slippi_semver: &str) {
+fn overwrite_from_server(
+    api_client: &APIClient,
+    user: &Mutex<UserInfo>,
+    rank: &Mutex<RankInfo>,
+    uid: String,
+    slippi_semver: &str,
+) -> Result<(), APILoginError> {
    let is_beta = match slippi_semver.contains("beta") {
        true => "-beta",
        false => "",
    };

    // @TODO: Switch this to a GraphQL call? Likely a Fizzi/Nikki task.
-    let url = format!("{USER_API_URL}{is_beta}/{uid}?additionalFields=chatMessages");
+    let url = format!("{USER_API_URL}{is_beta}/{uid}?additionalFields=chatMessages,rank");

-    tracing::warn!(?url, "Fetching user info");
+    tracing::info!(target: Log::SlippiOnline, "Fetching user info");

-    match api_client.get(&url).call() {
-        Ok(response) => match response.into_string() {
-            Ok(body) => match serde_json::from_str::<UserInfoAPIResponse>(&body) {
-                Ok(info) => {
-                    let mut lock = user.lock().expect("Unable to lock user in attempt_login");
-
-                    lock.uid = info.uid;
-                    lock.display_name = info.display_name;
-                    lock.connect_code = info.connect_code;
-                    lock.latest_version = info.latest_version;
-                    lock.chat_messages = Some(info.chat_messages);
-
-                    (*lock).sanitize();
-                },
-
-                Err(error) => {
-                    tracing::error!(?error, "Unable to deserialize user info API payload");
-                },
-            },
-
-            // Failed to read into a string, usually an I/O error.
-            Err(error) => {
-                tracing::error!(?error, "Unable to read user info response body");
-            },
-        },
-
-        // `error` is an enum, where one branch will contain the status code if relevant.
-        // We log the debug representation to just see it all.
-        Err(error) => {
-            tracing::error!(?error, "API call for user info failed");
-        },
-    }
+    let info: UserInfoAPIResponse = api_client
+        .get(&url)
+        .call()
+        .map_err(APILoginError::Client)?
+        .into_json()
+        .map_err(APILoginError::IO)?;
+
+    let mut lock = user.lock().unwrap();
+    lock.uid = info.uid;
+    lock.display_name = info.display_name;
+    lock.connect_code = info.connect_code;
+    lock.latest_version = info.latest_version;
+    lock.chat_messages = Some(info.chat_messages);
+    (*lock).sanitize();
+
+    let rank_idx = SlippiRank::decide(
+        info.rank.rating_ordinal,
+        info.rank.global_placing.unwrap_or(0),
+        info.rank.regional_placing.unwrap_or(0),
+        info.rank.rating_update_count,
+    ) as i8;
+
+    let mut lock = rank.lock().unwrap();
+
+    *lock = RankInfo {
+        rank: rank_idx,
+        rating_ordinal: info.rank.rating_ordinal,
+        global_placing: info.rank.global_placing.unwrap_or(0),
+        regional_placing: info.rank.regional_placing.unwrap_or(0),
+        rating_update_count: info.rank.rating_update_count,
+        rating_change: 0.0, // No change on initial load
+        rank_change: 0, // No change on initial load
+    };
+
+    Ok(())
}

user/src/rank_fetcher/mod.rs (new file)
@@ -0,0 +1,101 @@
+use std::sync::{Arc, Mutex};
+use std::thread;
+
+use slippi_gg_api::APIClient;
+
+use crate::RankInfo;
+
+mod network;
+
+mod rank;
+pub use rank::SlippiRank;
+
+/// Represents current state of the rank flow.
+///
+/// Note that we currently mark this as C-compatible due to FFI usage.
+#[repr(C)]
+#[derive(Copy, Clone, Debug)]
+pub enum RankFetchStatus {
+    Fetching,
+    Fetched,
+    Error,
+}
+
+/// A newtype that exists simply to reduce boilerplate. This might also
+/// get replaced by atomics, but I don't want to block launching.
+#[derive(Clone, Debug)]
+pub struct RankFetcherStatus(Arc<Mutex<RankFetchStatus>>);
+
+impl RankFetcherStatus {
+    /// Creates and returns a new status.
+    ///
+    /// This defaults to `Error`. We attempt to load initial rank data on client
+    /// sign-in, meaning we should (theoretically, at least) always have some
+    /// generic rank data to work with - but we'll set it then to be safe.
+    pub fn new() -> Self {
+        Self(Arc::new(Mutex::new(RankFetchStatus::Error)))
+    }
+
+    /// Sets the underlying status.
+    pub fn set(&self, status: RankFetchStatus) {
+        let mut lock = self.0.lock().unwrap();
+        *lock = status;
+    }
+
+    /// Gets the underlying status.
+    pub fn get(&self) -> RankFetchStatus {
+        let lock = self.0.lock().unwrap();
+        *lock
+    }
+}
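The doc comment above notes that the `Mutex` newtype might later be replaced by atomics. A minimal sketch of that alternative, assuming the same three-state flag (this is not part of the PR; `AtomicRankFetcherStatus` is a hypothetical name):

```rust
use std::sync::atomic::{AtomicU8, Ordering};

#[repr(u8)]
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum RankFetchStatus {
    Fetching = 0,
    Fetched = 1,
    Error = 2,
}

/// Lock-free alternative to the `Arc<Mutex<RankFetchStatus>>` newtype.
pub struct AtomicRankFetcherStatus(AtomicU8);

impl AtomicRankFetcherStatus {
    pub fn new() -> Self {
        // Same default as the PR: start out in `Error` until a fetch runs.
        Self(AtomicU8::new(RankFetchStatus::Error as u8))
    }

    pub fn set(&self, status: RankFetchStatus) {
        self.0.store(status as u8, Ordering::Relaxed);
    }

    pub fn get(&self) -> RankFetchStatus {
        // Map the raw byte back; unknown values collapse to `Error`.
        match self.0.load(Ordering::Relaxed) {
            0 => RankFetchStatus::Fetching,
            1 => RankFetchStatus::Fetched,
            _ => RankFetchStatus::Error,
        }
    }
}
```

Since the status is a single byte with no invariants tied to other state, relaxed ordering suffices and the `Mutex` (and its `unwrap()` poisoning concern) goes away.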
+
+/// A type that holds and manages background rank update API calls.
+#[derive(Clone, Debug)]
+pub struct RankFetcher {
+    pub status: RankFetcherStatus,
+    request_thread: Arc<Mutex<Option<thread::JoinHandle<()>>>>,
+}
+
+impl RankFetcher {
+    /// Creates and returns a new `RankFetcher`.
+    pub fn new() -> Self {
+        Self {
+            status: RankFetcherStatus::new(),
+            request_thread: Arc::new(Mutex::new(None)),
+        }
+    }
+
+    /// Fetches the match result for a given match ID.
+    ///
+    /// This will spin up a background thread to fetch the match result
+    /// and update the rank data accordingly. If a background thread is already
+    /// running, this will not start a new one.
+    pub fn fetch_match_result(
+        &self,
+        api_client: APIClient,
+        match_id: String,
+        uid: String,
+        play_key: String,
+        data: Arc<Mutex<RankInfo>>,
+    ) {
+        let mut thread = self.request_thread.lock().unwrap();
+
+        // If a user leaves and re-enters the CSS while a request is ongoing, we
+        // don't want to fire up multiple threads and issue multiple requests: limit
+        // things to one background thread at a time.
+        if thread.is_some() && !thread.as_ref().unwrap().is_finished() {
+            return;
+        }
+
+        let status = self.status.clone();
+
+        let background_thread = thread::Builder::new()
+            .name("RankMatchResultThread".into())
+            .spawn(move || {
+                network::run_match_result(api_client, match_id, uid, play_key, status, data);
+            })
+            .expect("Failed to spawn RankMatchResultThread.");
+
+        *thread = Some(background_thread);
+    }
+}
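The one-thread-at-a-time guard above hinges on `JoinHandle::is_finished`. Stripped of the rank-specific types, the pattern looks roughly like this (`spawn_if_idle` is a hypothetical helper for illustration, not part of the PR):

```rust
use std::thread;

/// Spawn `work` only if no previously spawned job in `slot` is still running.
/// Returns true if a new thread was started.
pub fn spawn_if_idle(
    slot: &mut Option<thread::JoinHandle<()>>,
    work: impl FnOnce() + Send + 'static,
) -> bool {
    if let Some(handle) = slot.as_ref() {
        if !handle.is_finished() {
            // A previous request is still in flight; don't stack another.
            return false;
        }
    }
    *slot = Some(thread::spawn(work));
    true
}
```

Note that `is_finished` is non-blocking; a finished-but-unjoined handle simply gets overwritten here, which matches the fetcher's fire-and-forget usage.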

user/src/rank_fetcher/network.rs (new file)
@@ -0,0 +1,193 @@
+use std::sync::{Arc, Mutex};
+use std::thread::sleep;
+use std::time::Duration;
+
+use serde_json::json;
+
+use dolphin_integrations::Log;
+use slippi_gg_api::{APIClient, GraphQLError};
+
+use super::{RankFetchStatus, RankFetcherStatus, RankInfo, SlippiRank};
+
+/// The core of the background thread that handles network requests
+/// for checking player rank updates.
+pub fn run_match_result(
+    api_client: APIClient,
+    match_id: String,
+    uid: String,
+    play_key: String,
+    status: RankFetcherStatus,
+    data: Arc<Mutex<RankInfo>>,
+) {
+    let mut retry_index = 0;
+
+    status.set(RankFetchStatus::Fetching);
+
+    loop {
+        match fetch_match_result(&api_client, &match_id, &uid, &play_key) {
+            Ok(response) => {
+                // If the match hasn't been processed yet, wait and retry
+                if response.status == MatchStatus::Assigned {
+                    retry_index += 1;
+                    if retry_index < 3 {
+                        sleep(Duration::from_secs(2));
+                        continue;
+                    }
+                }
+
+                update_rank(&data, response);
+                status.set(RankFetchStatus::Fetched);
+                break;
+            },
+
+            Err(error) => {
+                tracing::error!(
+                    target: Log::SlippiOnline,
+                    ?error,
+                    "Failed to fetch match result"
+                );
+
+                retry_index += 1;
+
+                // Only set the error flag after multiple retries have failed(?)
+                if retry_index >= 3 {
+                    status.set(RankFetchStatus::Error);
+                    break;
+                }
+
+                let duration = Duration::from_secs(1);
+                sleep(duration);
+            },
+        }
+    }
+}
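The loop above caps both the "match not processed yet" path and the error path at three attempts, sleeping between tries. The same bounded-retry shape as a free-standing generic helper (illustrative only; the PR deliberately inlines it so the two paths can use different delays):

```rust
use std::thread::sleep;
use std::time::Duration;

/// Call `attempt` until it succeeds or `max_tries` attempts have failed,
/// sleeping `delay` between failures. Returns the last error on exhaustion.
pub fn with_retries<T, E>(
    mut attempt: impl FnMut() -> Result<T, E>,
    max_tries: u32,
    delay: Duration,
) -> Result<T, E> {
    let mut tries = 0;
    loop {
        match attempt() {
            Ok(value) => return Ok(value),
            Err(error) => {
                tries += 1;
                if tries >= max_tries {
                    return Err(error);
                }
                sleep(delay);
            },
        }
    }
}
```

A fixed delay is fine for three tries; with a larger budget you would typically grow the delay (exponential backoff) instead.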
+
+#[derive(Clone, Copy, Debug, Default, serde::Deserialize)]
+struct MatchResultParticipant {
+    #[serde(alias = "ordinal")]
+    pub pre_match_ordinal: Option<f32>,
+
+    #[serde(alias = "dailyGlobalPlacement")]
+    pub pre_match_daily_global_placement: Option<u16>,
+
+    #[serde(alias = "dailyRegionalPlacement")]
+    pub pre_match_daily_regional_placement: Option<u16>,
+
+    #[serde(alias = "ratingUpdateCount")]
+    pub pre_match_rating_update_count: Option<u32>,
+
+    #[serde(alias = "ratingChange")]
+    pub post_match_rating_change: Option<f32>, // Null until the match is processed
+}
+
+#[derive(Clone, Copy, Debug, PartialEq, serde::Deserialize)]
+#[serde(rename_all = "UPPERCASE")]
+pub enum MatchStatus {
+    Assigned,
+    Complete,
+    Abandoned,
+    Orphaned,
+    Terminated,
+    Error,
+    Unhandled,
+}
+
+impl Default for MatchStatus {
+    fn default() -> Self {
+        MatchStatus::Error // Default to error in case it's missing
+    }
+}
+
+#[derive(Clone, Copy, Debug, Default, serde::Deserialize)]
+struct MatchResultAPIResponse {
+    #[serde(alias = "status")]
+    pub status: MatchStatus,
+
+    // Include the participant
+    #[serde(alias = "participant")]
+    pub participant: MatchResultParticipant,
+}
+
+fn fetch_match_result(
+    api_client: &APIClient,
+    match_id: &str,
+    uid: &str,
+    play_key: &str,
+) -> Result<MatchResultAPIResponse, GraphQLError> {
+    let query = r#"
+        query ($request: OnlineMatchRequestInput!) {
+            getRankedMatchPersonalResult(request: $request) {
+                status
+                participant {
+                    ordinal
+                    dailyGlobalPlacement
+                    dailyRegionalPlacement
+                    ratingUpdateCount
+                    ratingChange
+                }
+            }
+        }
+    "#;
+
+    let variables = json!({
+        "request": {
+            "matchId": match_id,
+            "fbUid": uid,
+            "playKey": play_key,
+        }
+    });
+
+    let response = api_client
+        .graphql(query)
+        .variables(variables)
+        .data_field("/data/getRankedMatchPersonalResult")
+        .send()?;
+
+    Ok(response)
+}
+
+/// Updates the previous and current rank data based on the match result response.
+fn update_rank(rank_data: &Mutex<RankInfo>, response: MatchResultAPIResponse) {
+    // Grab the pre-match data and put it in previous.
+    // It's possible that the previous will no longer match the prior previous rank
+    // that was displayed, but I think that's okay because that would only happen
+    // if another match was reported while we were in this one (abandonment) and we
+    // want to correctly show the impact of the last match.
+
+    // Start loading in the pre-match values (previous rank)
+    let mut rank_info = RankInfo {
+        rating_ordinal: response.participant.pre_match_ordinal.unwrap_or(0.0),
+        global_placing: response.participant.pre_match_daily_global_placement.unwrap_or(0),
+        regional_placing: response.participant.pre_match_daily_regional_placement.unwrap_or(0),
+        rating_update_count: response.participant.pre_match_rating_update_count.unwrap_or(0),
+        rating_change: response.participant.post_match_rating_change.unwrap_or(0.0),
+        ..Default::default()
+    };
+
+    // Determine the old rank based on the pre-match data
+    let prev_rank_idx = get_rank_idx_from_info(&rank_info);
+
+    // Use the rating change to update the rating_ordinal. Assume that the placements haven't
+    // changed since they only update once daily anyway. Also assume that the update count
+    // has incremented by 1. This could technically be incorrect, but it would only matter
+    // during placement matches, so it's probably not a huge deal.
+    rank_info.rating_ordinal += rank_info.rating_change;
+    rank_info.rating_update_count += 1;
+
+    // Determine the new rank index and rank change
+    rank_info.rank = get_rank_idx_from_info(&rank_info);
+    rank_info.rank_change = rank_info.rank - prev_rank_idx;
+
+    // Load into rank_data
+    let mut rank_data = rank_data.lock().unwrap();
+    *rank_data = rank_info;
+}
+
+fn get_rank_idx_from_info(info: &RankInfo) -> i8 {
+    SlippiRank::decide(
+        info.rating_ordinal,
+        info.global_placing,
+        info.regional_placing,
+        info.rating_update_count,
+    ) as i8
+}

user/src/rank_fetcher/rank.rs (new file)
@@ -0,0 +1,113 @@
+/// Represents a rank in the Slippi playerbase.
+#[repr(i8)]
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub enum SlippiRank {
+    Unranked,
+    Bronze1,
+    Bronze2,
+    Bronze3,
+    Silver1,
+    Silver2,
+    Silver3,
+    Gold1,
+    Gold2,
+    Gold3,
+    Platinum1,
+    Platinum2,
+    Platinum3,
+    Diamond1,
+    Diamond2,
+    Diamond3,
+    Master1,
+    Master2,
+    Master3,
+    Grandmaster,
+}
+
+impl SlippiRank {
+    /// Decides the rank based on the values provided.
+    pub fn decide(rating_ordinal: f32, global_placing: u16, regional_placing: u16, rating_update_count: u32) -> Self {
+        if rating_update_count < 5 {
+            return SlippiRank::Unranked;
+        }
+
+        // TODO: It is technically possible, though unlikely, for rating_ordinal to be negative.
+        // In that case, this function would not show the rank correctly.
+        if rating_ordinal > 0.0 && rating_ordinal <= 765.42 {
+            return SlippiRank::Bronze1;
+        }
+
+        if rating_ordinal > 765.43 && rating_ordinal <= 913.71 {
+            return SlippiRank::Bronze2;
+        }
+
+        if rating_ordinal > 913.72 && rating_ordinal <= 1054.86 {
+            return SlippiRank::Bronze3;
+        }
+
+        if rating_ordinal > 1054.87 && rating_ordinal <= 1188.87 {
+            return SlippiRank::Silver1;
+        }
+
+        if rating_ordinal > 1188.88 && rating_ordinal <= 1315.74 {
+            return SlippiRank::Silver2;
+        }
+
+        if rating_ordinal > 1315.75 && rating_ordinal <= 1435.47 {
+            return SlippiRank::Silver3;
+        }
+
+        if rating_ordinal > 1435.48 && rating_ordinal <= 1548.06 {
+            return SlippiRank::Gold1;
+        }
+
+        if rating_ordinal > 1548.07 && rating_ordinal <= 1653.51 {
+            return SlippiRank::Gold2;
+        }
+
+        if rating_ordinal > 1653.52 && rating_ordinal <= 1751.82 {
+            return SlippiRank::Gold3;
+        }
+
+        if rating_ordinal > 1751.83 && rating_ordinal <= 1842.99 {
+            return SlippiRank::Platinum1;
+        }
+
+        if rating_ordinal > 1843.0 && rating_ordinal <= 1927.02 {
+            return SlippiRank::Platinum2;
+        }
+
+        if rating_ordinal > 1927.03 && rating_ordinal <= 2003.91 {
+            return SlippiRank::Platinum3;
+        }
+
+        if rating_ordinal > 2003.92 && rating_ordinal <= 2073.66 {
+            return SlippiRank::Diamond1;
+        }
+
+        if rating_ordinal > 2073.67 && rating_ordinal <= 2136.27 {
+            return SlippiRank::Diamond2;
+        }
+
+        if rating_ordinal > 2136.28 && rating_ordinal <= 2191.74 {
+            return SlippiRank::Diamond3;
+        }
+
+        if rating_ordinal >= 2191.75 && global_placing > 0 && regional_placing > 0 {
+            return SlippiRank::Grandmaster;
+        }
+
+        if rating_ordinal > 2191.75 && rating_ordinal <= 2274.99 {
+            return SlippiRank::Master1;
+        }
+
+        if rating_ordinal > 2275.0 && rating_ordinal <= 2350.0 {
+            return SlippiRank::Master2;
+        }
+
+        if rating_ordinal > 2350.0 {
+            return SlippiRank::Master3;
+        }
+
+        SlippiRank::Unranked
+    }
+}
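The threshold ladder in `decide` could equally be expressed as a lookup table. A sketch, with the ceilings copied from the chained `if`s above and the Unranked/Grandmaster special cases elided (`Tier` and `tier_for` are illustrative names, not part of the PR); note that a single inclusive ceiling per tier also closes the tiny float gaps the paired bounds leave (e.g. between 765.42 and 765.43):

```rust
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum Tier {
    Bronze1, Bronze2, Bronze3,
    Silver1, Silver2, Silver3,
    Gold1, Gold2, Gold3,
    Platinum1, Platinum2, Platinum3,
    Diamond1, Diamond2, Diamond3,
    Master1, Master2, Master3,
}

/// Upper bound (inclusive) for each tier, mirroring the chained comparisons.
const TIER_CEILINGS: &[(f32, Tier)] = &[
    (765.42, Tier::Bronze1),
    (913.71, Tier::Bronze2),
    (1054.86, Tier::Bronze3),
    (1188.87, Tier::Silver1),
    (1315.74, Tier::Silver2),
    (1435.47, Tier::Silver3),
    (1548.06, Tier::Gold1),
    (1653.51, Tier::Gold2),
    (1751.82, Tier::Gold3),
    (1842.99, Tier::Platinum1),
    (1927.02, Tier::Platinum2),
    (2003.91, Tier::Platinum3),
    (2073.66, Tier::Diamond1),
    (2136.27, Tier::Diamond2),
    (2191.74, Tier::Diamond3),
    (2274.99, Tier::Master1),
    (2350.0, Tier::Master2),
];

/// Return the first tier whose ceiling the rating does not exceed.
pub fn tier_for(rating_ordinal: f32) -> Tier {
    for &(ceiling, tier) in TIER_CEILINGS {
        if rating_ordinal <= ceiling {
            return tier;
        }
    }
    Tier::Master3
}
```

Keeping the thresholds in one `const` table also makes them easier to audit against the website's values in one glance.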
@@ -4,9 +4,10 @@ use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

+use dolphin_integrations::Log;
use slippi_gg_api::APIClient;

-use super::{UserInfo, attempt_login};
+use super::{RankFetcherStatus, RankInfo, UserInfo, attempt_login};

/// This type manages access to user information, as well as any background thread watching
/// for `user.json` file existence.
@@ -28,9 +29,11 @@ impl UserInfoWatcher {
    /// Spins up (or re-spins-up) the background watcher thread for the `user.json` file.
    pub fn watch_for_login(
        &mut self,
-        api_client: APIClient,
-        user_json_path: Arc<PathBuf>,
-        user: Arc<Mutex<UserInfo>>,
+        api_client: &APIClient,
+        user_json_path: &Arc<PathBuf>,
+        user: &Arc<Mutex<UserInfo>>,
+        rank: &Arc<Mutex<RankInfo>>,
+        rank_fetcher_status: &RankFetcherStatus,
        slippi_semver: &str,
    ) {
        // If we're already watching, no-op out.
@@ -45,8 +48,13 @@ impl UserInfoWatcher {
        let should_watch = self.should_watch.clone();
        should_watch.store(true, Ordering::Relaxed);

-        // Create an owned String once we know we're actually launching the thread.
-        let slippi_semver = slippi_semver.to_string();
+        // Create owned types once we know we're actually launching the thread.
+        let semver = slippi_semver.to_string();
+        let client = api_client.clone();
+        let json_path = user_json_path.clone();
+        let user = user.clone();
+        let rank = rank.clone();
+        let rank_fetcher_status = rank_fetcher_status.clone();

        let watcher_thread = thread::Builder::new()
            .name("SlippiUserJSONWatcherThread".into())
@@ -56,7 +64,7 @@ impl UserInfoWatcher {
                    return;
                }

-                if attempt_login(&api_client, &user, &user_json_path, &slippi_semver) {
+                if attempt_login(&client, &user, &rank, &rank_fetcher_status, &json_path, &semver) {
                    return;
                }

@@ -79,7 +87,7 @@ impl UserInfoWatcher {
        self.should_watch.store(false, Ordering::Relaxed);
        if let Some(watcher_thread) = self.watcher_thread.take() {
            if let Err(error) = watcher_thread.join() {
-                tracing::error!(?error, "user.json background thread join failure");
+                tracing::error!(target: Log::SlippiOnline, ?error, "user.json background thread join failure");
            }
        }
    }