Day 5: Print Queue
Megathread guidelines
- Keep top-level comments to solutions only; if you want to say something other than a solution, put it in a new post. (Replies to comments can be whatever.)
- You can send code in code blocks by using three backticks, the code, and then three backticks, or use something such as https://topaz.github.io/paste/ if you prefer sending it through a URL.
FAQ
- What is this?: Here is a post with all the details: https://programming.dev/post/6637268
- Where do I participate?: https://adventofcode.com/
- Is there a leaderboard for the community?: We have a programming.dev leaderboard with the info on how to join in this post: https://programming.dev/post/6631465
I’ve got a “smart” solution and a really dumb one. I’ll start with the smart one (incomplete, but you can infer the rest). I tried four different approaches to get it faster, use less memory, etc.
```csharp
// this is from a nuget package. My Mathy roommate told me this was a topological sort.
// It's also my preferred, since it'd perform better on larger data sets.
return lines
    .AsParallel()
    .Where(line => !IsInOrder(GetSoonestOccurrences(line), aggregateRules))
    .Sum(line => line.StableOrderTopologicallyBy(
            getDependencies: page => aggregateRules.TryGetValue(page, out var mustPreceed)
                ? mustPreceed.Intersect(line)
                : Enumerable.Empty<Page>())
        .Middle()
    );
```
The dumb solution. These comparisons aren’t fully transitive. I can’t believe it works.
```csharp
public static SortedSet<Page> Sort3(Page[] line, Dictionary<Page, System.Collections.Generic.HashSet<Page>> rules)
{
    // how the hell is this working?
    var sorted = new SortedSet<Page>(new Sort3Comparer(rules));
    foreach (var page in line)
        sorted.Add(page);
    return sorted;
}

public static Page[] OrderBy(Page[] line, Dictionary<Page, System.Collections.Generic.HashSet<Page>> rules)
{
    return line.OrderBy(identity, new Sort3Comparer(rules)).ToArray();
}

sealed class Sort3Comparer : IComparer<Page>
{
    private readonly Dictionary<Page, System.Collections.Generic.HashSet<Page>> _rules;

    public Sort3Comparer(Dictionary<Page, System.Collections.Generic.HashSet<Page>> rules) => _rules = rules;

    public int Compare(Page x, Page y)
    {
        if (_rules.TryGetValue(x, out var xrules))
        {
            if (xrules.Contains(y)) return -1;
        }
        if (_rules.TryGetValue(y, out var yrules))
        {
            if (yrules.Contains(x)) return 1;
        }
        return 0;
    }
}
```
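One way to see why the non-transitive comparer can still work: as later comments in this thread point out, the input gives a direct rule for every pair of pages that appears together in an update, so within a single line the comparer never has to rely on transitivity. A minimal Python sketch of the same idea, using the puzzle's example rules:

```python
from functools import cmp_to_key

# after[x] = pages that must come after x (taken from the puzzle's example rules)
after = {97: {75, 47, 61, 53, 29, 13}, 75: {47, 61, 53, 29, 13},
         47: {61, 53, 29, 13}, 61: {53, 29, 13}, 53: {29, 13}, 29: {13}}

def cmp(x, y):
    # consult only direct rules; no transitivity is assumed
    if y in after.get(x, ()): return -1
    if x in after.get(y, ()): return 1
    return 0

print(sorted([75, 97, 47, 61, 53], key=cmp_to_key(cmp)))  # [97, 75, 47, 61, 53]
```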
| Method | Mean | Error | StdDev | Gen0 | Gen1 | Allocated |
|---|---|---|---|---|---|---|
| Part2_UsingList (literally just Insert) | 660.3 us | 12.87 us | 23.20 us | 187.5000 | 35.1563 | 1144.86 KB |
| Part2_TrackLinkedList (wrong now) | 1,559.7 us | 6.91 us | 6.46 us | 128.9063 | 21.4844 | 795.03 KB |
| Part2_TopologicalSort | 732.3 us | 13.97 us | 16.09 us | 285.1563 | 61.5234 | 1718.36 KB |
| Part2_SortedSet | 309.1 us | 4.13 us | 3.45 us | 54.1992 | 10.2539 | 328.97 KB |
| Part2_OrderBy | 304.5 us | 6.09 us | 9.11 us | 48.8281 | 7.8125 | 301.29 KB |

Uiua
This is the first one that caused me some headache because I didn’t read the instructions carefully enough.
I kept trying to create a sorted list for when all available pages were used, which got me stuck in an endless loop. Another fun part was figuring out to use memberof (∈) instead of find (⌕) in the last line of FindNext. So much time spent on debugging other areas of the code.

Run with example input here
```
FindNext ← ⊙( ⊡1⍉, ⊃▽(▽¬)⊸∈ ⊙⊙(⊡0⍉.) :⊙(⟜(▽¬∈)) )
# find the order of pages for a given set of rules
FindOrder ← ( ◴♭. [] ⍢(⊂FindNext|⋅(>1⧻)) ⊙◌⊂ )
PartOne ← ( &rs ∞ &fo "input-5.txt" ∩°□°⊟⊜□¬⌕"\n\n". ⊙(⊜(□⊜⋕≠@,.)≠@\n.↘1) ⊜(⊜⋕≠@|.)≠@\n. ⊙. ¤ ⊞(◡(°□:) ⟜:⊙(°⊟⍉) =2+∩∈ ▽ FindOrder ⊸≍°□: ⊙◌ ) ≡◇(⊡⌊÷2⧻.)▽♭ /+ )
PartTwo ← ( &rs ∞ &fo "input-5.txt" ∩°□°⊟⊜□¬⌕"\n\n". ⊙(⊜(□⊜⋕≠@,.)≠@\n.↘1) ⊜(⊜⋕≠@|.)≠@\n. ⊙. ⍜¤⊞( ◡(°□:) ⟜:⊙(°⊟⍉) =2+∩∈ ▽ FindOrder ⊸≍°□: ⊟∩□ ) ⊙◌ ⊃(⊡0)(⊡1)⍉ ≡◇(⊡⌊÷2⧻.)▽¬≡°□ /+ )
&p "Day 5:"
&pf "Part 1: "
&p PartOne
&pf "Part 2: "
&p PartTwo
```
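For readers who don't know Uiua: the ∈ vs. ⌕ distinction above is roughly a membership test versus a contiguous-pattern search. A rough Python analogy (not the full Uiua semantics):

```python
# memberof (∈): does each element of xs occur anywhere in ys?
def memberof(xs, ys):
    return [x in ys for x in xs]

# find (⌕): where does the whole pattern start as a contiguous run?
def find(pattern, xs):
    n = len(pattern)
    return [xs[i:i + n] == pattern for i in range(len(xs) - n + 1)]

print(memberof([1, 2], [2, 3]))  # [False, True]
print(find([1, 2], [0, 1, 2]))   # [False, True]
```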
Uiua
Well it’s still today here, and this is how I spent my evening. It’s not pretty or maybe even good, but it works on the test data…
Uses Kahn’s algorithm with simplifying assumptions based on the helpful nature of the data.
```
Data ← ⊜(□)⊸≠@\n "47|53\n97|13\n97|61\n97|47\n75|29\n61|13\n75|53\n29|13\n97|29\n53|29\n61|53\n97|53\n61|29\n47|13\n75|47\n97|75\n47|61\n75|61\n47|29\n75|13\n53|13\n\n75,47,61,53,29\n97,61,53,29,13\n75,29,13\n75,97,47,61,53\n61,13,29\n97,13,75,29,47"
Rs ← ≡◇(⊜⋕⊸≠@|)▽⊸≡◇(⧻⊚⌕@|)Data
Ps ← ≡⍚(⊜⋕⊸≠@,)▽⊸≡◇(¬⧻⊚⌕@|)Data
NoPred ← ⊢▽:⟜(≡(=0/+⌕)⊙¤)◴♭⟜≡⊣ # Find entry without predecessors.
GetLead ← ⊸(▽:⟜(≡(¬/+=))⊙¤)⟜NoPred # Remove that leading entry.
Rules ← ⇌⊂⊃(⇌⊢°□⊢|≡°□↘1)[□⍢(GetLead|≠1⧻)] Rs # Repeatedly find rule without predecessors (Kaaaaaahn!).
Sorted ← ⊏⍏⊗,Rules
IsSorted ← /×>0≡/-◫2⊗°□: Rules
MidVal ← ⊡:⟜(⌊÷ 2⧻)
⇌⊕□⊸≡IsSorted Ps # Group by whether the pages are in sort order.
≡◇(/+≡◇(MidVal Sorted)) # Find midpoints and sum.
```
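For reference, the generic shape of Kahn's algorithm the comment refers to: repeatedly emit a node with no remaining predecessors. A minimal Python sketch using the example data (not a translation of the Uiua above):

```python
from collections import deque

def kahn(nodes, edges):
    """Topological order of `nodes` using only the rules among them (Kahn's algorithm)."""
    succs = {n: set() for n in nodes}
    indeg = {n: 0 for n in nodes}
    for a, b in edges:
        if a in succs and b in succs and b not in succs[a]:
            succs[a].add(b)
            indeg[b] += 1
    ready = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in succs[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)
    return order  # comes up short iff a cycle blocks progress

rules = [(47, 53), (97, 61), (61, 53), (75, 47), (97, 47),
         (75, 61), (47, 61), (97, 75), (97, 53), (75, 53)]
print(kahn([75, 47, 61, 53, 97], rules))  # [97, 75, 47, 61, 53]
```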
Oh my. I just watched yernab’s video, and this becomes so much easier:
```
# Order is totally specified, so sort by number of predecessors,
# check to see which were already sorted, then group and sum each group.
Data ← ⊜(□⊜□⊸≠@\n)⊸(¬⦷"\n\n")"47|53\n97|13\n97|61\n97|47\n75|29\n61|13\n75|53\n29|13\n97|29\n53|29\n61|53\n97|53\n61|29\n47|13\n75|47\n97|75\n47|61\n75|61\n47|29\n75|13\n53|13\n\n75,47,61,53,29\n97,61,53,29,13\n75,29,13\n75,97,47,61,53\n61,13,29\n97,13,75,29,47"
Rs ← ≡◇(⊜⋕⊸≠@|)°□⊢Data
Ps ← ≡⍚(⊜⋕⊸≠@,)°□⊣Data
⊕(/+≡◇(⊡⌊÷2⧻.))¬≡≍⟜:≡⍚(⊏⍏/+⊞(∈Rs⊟)..).Ps
```
Does this language ever look pretty? Great for signaling UFOs though :D
Ah, but the terseness of the code allows the beauty of the underlying algorithm to shine through :-)
Those unicode code points won’t use themselves.
Factor
```factor
: get-input ( -- rules updates )
  "vocab:aoc-2024/05/input.txt" utf8 file-lines
  { "" } split1
  "|" "," [ '[ [ _ split ] map ] ] bi@ bi* ;

: relevant-rules ( rules update -- rules' )
  '[ [ _ in? ] all? ] filter ;

: compliant? ( rules update -- ? )
  [ relevant-rules ] keep-under
  [ [ index* ] with map first2 < ] with all? ;

: middle-number ( update -- n )
  dup length 2 /i nth-of string>number ;

: part1 ( -- n )
  get-input [ compliant? ] with [ middle-number ] filter-map sum ;

: compare-pages ( rules page1 page2 -- <=> )
  [ 2array relevant-rules ] keep-under
  [ drop +eq+ ] [ first index zero? +gt+ +lt+ ? ] if-empty ;

: correct-update ( rules update -- update' )
  [ swapd compare-pages ] with sort-with ;

: part2 ( -- n )
  get-input dupd [ compliant? ] with reject
  [ correct-update middle-number ] with map-sum ;
```
Rust
Real thinker. Messed around with a couple solutions before this one. The gist is to take all the pairwise comparisons given and record them for easy access in a ranking matrix.
For the sample input, this grid would look like this (I left out all the non-present integers, but it would be a 98 x 98 grid where all the empty spaces are filled with `Ordering::Equal`):

|    | 13 | 29 | 47 | 53 | 61 | 75 | 97 |
|----|----|----|----|----|----|----|----|
| 13 | =  | >  | >  | >  | >  | >  | >  |
| 29 | <  | =  | >  | >  | >  | >  | >  |
| 47 | <  | <  | =  | <  | <  | >  | >  |
| 53 | <  | <  | >  | =  | >  | >  | >  |
| 61 | <  | <  | >  | <  | =  | >  | >  |
| 75 | <  | <  | <  | <  | <  | =  | >  |
| 97 | <  | <  | <  | <  | <  | <  | =  |
I discovered this can’t be used for a total order on the actual puzzle input because there were cycles in the pairs given (see how Rust changed sort implementations as of 1.81). I used `usize` for convenience (I did it with `u8` for all the pair values originally, but kept having to cast over and over `as usize`). Didn’t notice a performance difference, but I’m sure it uses a bit more memory.

Also, I liked the `simple_grid` crate a little better than the `grid` one. Will have to refactor that out at some point.

solution
```rust
use std::{cmp::Ordering, fs::read_to_string};
use simple_grid::Grid;

type Idx = (usize, usize);
type Matrix = Grid<Ordering>;
type Page = Vec<usize>;

fn parse_input(input: &str) -> (Vec<Idx>, Vec<Page>) {
    let split: Vec<&str> = input.split("\n\n").collect();
    let (pair_str, page_str) = (split[0], split[1]);
    let pairs = parse_pairs(pair_str);
    let pages = parse_pages(page_str);
    (pairs, pages)
}

fn parse_pairs(input: &str) -> Vec<Idx> {
    input
        .lines()
        .map(|l| {
            let (a, b) = l.split_once('|').unwrap();
            (a.parse().unwrap(), b.parse().unwrap())
        })
        .collect()
}

fn parse_pages(input: &str) -> Vec<Page> {
    input
        .lines()
        .map(|l| -> Page {
            l.split(",")
                .map(|d| d.parse::<usize>().expect("invalid digit"))
                .collect()
        })
        .collect()
}

fn create_matrix(pairs: &[Idx]) -> Matrix {
    let max = *pairs
        .iter()
        .flat_map(|(a, b)| [a, b])
        .max()
        .expect("iterator is non-empty")
        + 1;
    let mut matrix = Grid::new(max, max, vec![Ordering::Equal; max * max]);
    for (a, b) in pairs {
        matrix.replace_cell((*a, *b), Ordering::Less);
        matrix.replace_cell((*b, *a), Ordering::Greater);
    }
    matrix
}

fn valid_pages(pages: &[Page], matrix: &Matrix) -> usize {
    pages
        .iter()
        .filter_map(|p| {
            if check_order(p, matrix) {
                Some(p[p.len() / 2])
            } else {
                None
            }
        })
        .sum()
}

fn fix_invalid_pages(pages: &mut [Page], matrix: &Matrix) -> usize {
    pages
        .iter_mut()
        .filter(|p| !check_order(p, matrix))
        .map(|v| {
            v.sort_by(|a, b| *matrix.get((*a, *b)).unwrap());
            v[v.len() / 2]
        })
        .sum()
}

fn check_order(page: &[usize], matrix: &Matrix) -> bool {
    page.is_sorted_by(|a, b| *matrix.get((*a, *b)).unwrap() == Ordering::Less)
}

pub fn solve() {
    let input = read_to_string("inputs/day05.txt").expect("read file");
    let (pairs, mut pages) = parse_input(&input);
    let matrix = create_matrix(&pairs);
    println!("Part 1: {}", valid_pages(&pages, &matrix));
    println!("Part 2: {}", fix_invalid_pages(&mut pages, &matrix));
}
```
On github
*Edit: I did try switching to just using `std::collections::HashMap`, but it was 0.1 ms slower on average than using the `simple_grid::Grid`… `Vec[idx]` access is faster, maybe?

I think you may have overthought it; I just applied the rules by swapping unordered pairs until it was ordered :D Cool solution though.
Good old bubble sort
It’s called AdventOfCode, not AdventOfEfficientCode :D
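The swap-until-ordered idea described above is effectively a rule-driven bubble sort; one way to realize it, as a minimal Python sketch with a hypothetical `rules` set of (before, after) pairs:

```python
def fix_order(update, rules):
    """Sweep and swap adjacent pages that violate a rule until nothing moves."""
    pages = list(update)
    changed = True
    while changed:
        changed = False
        for i in range(len(pages) - 1):
            # a rule (x, y) means x must be printed before y
            if (pages[i + 1], pages[i]) in rules:
                pages[i], pages[i + 1] = pages[i + 1], pages[i]
                changed = True
    return pages

rules = {(47, 53), (75, 47), (47, 61), (61, 53), (75, 61), (75, 53)}
print(fix_order([53, 75, 61, 47], rules))  # [75, 47, 61, 53]
```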
Very cool approach. I didn’t think that far. I just wrote a compare function and hoped for the best.
Dart
A bit easier than I first thought it was going to be.
I had a look at the Uiua discussion, and this one looks to be beyond my pay grade, so this will be it for today.
```dart
import 'package:collection/collection.dart';
import 'package:more/more.dart';

(int, int) solve(List<String> lines) {
  var parts = lines.splitAfter((e) => e == '');
  var pred = SetMultimap.fromEntries(parts.first.skipLast(1).map((e) {
    var ps = e.split('|').map(int.parse);
    return MapEntry(ps.last, ps.first);
  }));
  ordering(a, b) => pred[a].contains(b) ? 1 : 0;
  var pageSets = parts.last.map((e) => e.split(',').map(int.parse).toList());
  var partn = pageSets.partition((ps) => ps.isSorted(ordering));
  return (
    partn.truthy.map((e) => e[e.length ~/ 2]).sum,
    partn.falsey.map((e) => (e..sort(ordering))[e.length ~/ 2]).sum
  );
}

part1(List<String> lines) => solve(lines).$1;
part2(List<String> lines) => solve(lines).$2;
```
Rust
While part 1 was pretty quick, part 2 took me a while to figure something out. I figured that the relation would probably be a total ordering, and obtained the actual order using topological sorting. But it turns out the relation has cycles, so the topological sort must be limited to the elements that actually occur in the lists.
Solution
```rust
use std::collections::{HashSet, HashMap, VecDeque};

fn parse_lists(input: &str) -> Vec<Vec<u32>> {
    input.lines()
        .map(|l| l.split(',').map(|e| e.parse().unwrap()).collect())
        .collect()
}

fn parse_relation(input: String) -> (HashSet<(u32, u32)>, Vec<Vec<u32>>) {
    let (ordering, lists) = input.split_once("\n\n").unwrap();
    let relation = ordering.lines()
        .map(|l| {
            let (a, b) = l.split_once('|').unwrap();
            (a.parse().unwrap(), b.parse().unwrap())
        })
        .collect();
    (relation, parse_lists(lists))
}

fn parse_graph(input: String) -> (Vec<Vec<u32>>, Vec<Vec<u32>>) {
    let (ordering, lists) = input.split_once("\n\n").unwrap();
    let mut graph = Vec::new();
    for l in ordering.lines() {
        let (a, b) = l.split_once('|').unwrap();
        let v: u32 = a.parse().unwrap();
        let w: u32 = b.parse().unwrap();
        let new_len = v.max(w) as usize + 1;
        if new_len > graph.len() {
            graph.resize(new_len, Vec::new())
        }
        graph[v as usize].push(w);
    }
    (graph, parse_lists(lists))
}

fn part1(input: String) {
    let (relation, lists) = parse_relation(input);
    let mut sum = 0;
    for l in lists {
        let mut valid = true;
        for i in 0..l.len() {
            for j in 0..i {
                if relation.contains(&(l[i], l[j])) {
                    valid = false;
                    break
                }
            }
            if !valid { break }
        }
        if valid {
            sum += l[l.len() / 2];
        }
    }
    println!("{sum}");
}

// Topological order of graph, but limited to nodes in the set `subgraph`.
// Otherwise the graph is not acyclic.
fn topological_sort(graph: &[Vec<u32>], subgraph: &HashSet<u32>) -> Vec<u32> {
    let mut order = VecDeque::with_capacity(subgraph.len());
    let mut marked = vec![false; graph.len()];
    for &v in subgraph {
        if !marked[v as usize] {
            dfs(graph, subgraph, v as usize, &mut marked, &mut order)
        }
    }
    order.into()
}

fn dfs(graph: &[Vec<u32>], subgraph: &HashSet<u32>, v: usize, marked: &mut [bool],
       order: &mut VecDeque<u32>) {
    marked[v] = true;
    for &w in graph[v].iter().filter(|v| subgraph.contains(v)) {
        if !marked[w as usize] {
            dfs(graph, subgraph, w as usize, marked, order);
        }
    }
    order.push_front(v as u32);
}

fn rank(order: &[u32]) -> HashMap<u32, u32> {
    order.iter().enumerate().map(|(i, x)| (*x, i as u32)).collect()
}

// Part 1 with topological sorting, which is slower
fn _part1(input: String) {
    let (graph, lists) = parse_graph(input);
    let mut sum = 0;
    for l in lists {
        let subgraph = HashSet::from_iter(l.iter().copied());
        let rank = rank(&topological_sort(&graph, &subgraph));
        if l.is_sorted_by_key(|x| rank[x]) {
            sum += l[l.len() / 2];
        }
    }
    println!("{sum}");
}

fn part2(input: String) {
    let (graph, lists) = parse_graph(input);
    let mut sum = 0;
    for mut l in lists {
        let subgraph = HashSet::from_iter(l.iter().copied());
        let rank = rank(&topological_sort(&graph, &subgraph));
        if !l.is_sorted_by_key(|x| rank[x]) {
            l.sort_unstable_by_key(|x| rank[x]);
            sum += l[l.len() / 2];
        }
    }
    println!("{sum}");
}

util::aoc_main!();
```
also on github
Lisp
Part 1 and 2
```lisp
(defun p1-process-rules (line)
  (mapcar #'parse-integer (uiop:split-string line :separator "|")))

(defun p1-process-pages (line)
  (mapcar #'parse-integer (uiop:split-string line :separator ",")))

(defun middle (pages)
  (nth (floor (length pages) 2) pages))

(defun check-rule-p (rule pages)
  (let ((p1 (position (car rule) pages))
        (p2 (position (cadr rule) pages)))
    (or (not p1) (not p2) (< p1 p2))))

(defun ordered-p (pages rules)
  (loop for r in rules
        unless (check-rule-p r pages) return nil
        finally (return t)))

(defun run-p1 (rules-file pages-file)
  (let ((rules (read-file rules-file #'p1-process-rules))
        (pages (read-file pages-file #'p1-process-pages)))
    (loop for p in pages
          when (ordered-p p rules)
            sum (middle p))))

(defun fix-pages (rules pages)
  (sort pages (lambda (p1 p2) (ordered-p (list p1 p2) rules))))

(defun run-p2 (rules-file pages-file)
  (let ((rules (read-file rules-file #'p1-process-rules))
        (pages (read-file pages-file #'p1-process-pages)))
    (loop for p in pages
          unless (ordered-p p rules)
            sum (middle (fix-pages rules p)))))
```
Nim
```nim
import ../aoc, strutils, sequtils, tables

type Rules = ref Table[int, seq[int]]

# check if an update sequence is valid
proc valid(update: seq[int], rules: Rules): bool =
  for pi, p in update:
    for r in rules.getOrDefault(p):
      let ri = update.find(r)
      if ri != -1 and ri < pi:
        return false
  return true

proc backtrack(p: int, index: int, update: seq[int], rules: Rules, sorted: var seq[int]): bool =
  if index == 0:
    sorted[index] = p
    return true
  for r in rules.getOrDefault(p):
    if r in update and r.backtrack(index-1, update, rules, sorted):
      sorted[index] = p
      return true
  return false

# fix an invalid sequence
proc fix(update: seq[int], rules: Rules): seq[int] =
  echo "fixing", update
  var sorted = newSeqWith(update.len, 0)
  for p in update:
    if p.backtrack(update.len-1, update, rules, sorted):
      return sorted
  return @[]

proc solve*(input: string): array[2, int] =
  let parts = input.split("\r\n\r\n")
  let rulePairs = parts[0].splitLines.mapIt(it.strip.split('|').map(parseInt))
  let updates = parts[1].splitLines.mapIt(it.split(',').map(parseInt))
  # fill rules table
  var rules = new Rules
  for rp in rulePairs:
    if rules.hasKey(rp[0]):
      rules[rp[0]].add rp[1]
    else:
      rules[rp[0]] = @[rp[1]]
  # fill reverse rules table
  var backRules = new Rules
  for rp in rulePairs:
    if backRules.hasKey(rp[1]):
      backRules[rp[1]].add rp[0]
    else:
      backRules[rp[1]] = @[rp[0]]
  for u in updates:
    if u.valid(rules):
      result[0] += u[u.len div 2]
    else:
      let uf = u.fix(backRules)
      result[1] += uf[uf.len div 2]
```
I thought of doing a sort at first, but dismissed it for some reason, so I came up with this slow and bulky recursive backtracking thing, which traverses the rules as a graph until it reaches a depth equal to the length of the given sequence. Not my finest work, but it does solve the puzzle :)
Zig
```zig
const std = @import("std");
const List = std.ArrayList;
const Map = std.AutoHashMap;

const tokenizeScalar = std.mem.tokenizeScalar;
const splitScalar = std.mem.splitScalar;
const parseInt = std.fmt.parseInt;
const print = std.debug.print;
const contains = std.mem.containsAtLeast;
const eql = std.mem.eql;

var gpa = std.heap.GeneralPurposeAllocator(.{}){};
const alloc = gpa.allocator();

const Answer = struct {
    middle_sum: i32,
    reordered_sum: i32,
};

pub fn solve(input: []const u8) !Answer {
    var rows = splitScalar(u8, input, '\n');

    // key is a page number and value is a
    // list of pages to be printed before it
    var rules = Map(i32, List(i32)).init(alloc);
    var pages = List([]i32).init(alloc);
    defer {
        var iter = rules.iterator();
        while (iter.next()) |rule| {
            rule.value_ptr.deinit();
        }
        rules.deinit();
        pages.deinit();
    }

    var parse_rules = true;
    while (rows.next()) |row| {
        if (eql(u8, row, "")) {
            parse_rules = false;
            continue;
        }
        if (parse_rules) {
            var rule_pair = tokenizeScalar(u8, row, '|');
            const rule = try rules.getOrPut(try parseInt(i32, rule_pair.next().?, 10));
            if (!rule.found_existing) {
                rule.value_ptr.* = List(i32).init(alloc);
            }
            try rule.value_ptr.*.append(try parseInt(i32, rule_pair.next().?, 10));
        } else {
            var page = List(i32).init(alloc);
            var page_list = tokenizeScalar(u8, row, ',');
            while (page_list.next()) |list| {
                try page.append(try parseInt(i32, list, 10));
            }
            try pages.append(try page.toOwnedSlice());
        }
    }

    var middle_sum: i32 = 0;
    var reordered_sum: i32 = 0;
    var wrong_order = false;

    for (pages.items) |page| {
        var index: usize = page.len - 1;
        while (index > 0) : (index -= 1) {
            var page_rule = rules.get(page[index]) orelse continue;
            // check the rest of the pages
            var remaining: usize = 0;
            while (remaining < page[0..index].len) {
                if (contains(i32, page_rule.items, 1, &[_]i32{page[remaining]})) {
                    // re-order the wrong page
                    const element = page[remaining];
                    page[remaining] = page[index];
                    page[index] = element;
                    wrong_order = true;
                    if (rules.get(element)) |next_rule| {
                        page_rule = next_rule;
                    }
                    continue;
                }
                remaining += 1;
            }
        }
        if (wrong_order) {
            reordered_sum += page[(page.len - 1) / 2];
            wrong_order = false;
        } else {
            // middle page number
            middle_sum += page[(page.len - 1) / 2];
        }
    }
    return Answer{ .middle_sum = middle_sum, .reordered_sum = reordered_sum };
}

pub fn main() !void {
    const answer = try solve(@embedFile("input.txt"));
    print("Part 1: {d}\n", .{answer.middle_sum});
    print("Part 2: {d}\n", .{answer.reordered_sum});
}

test "test input" {
    const answer = try solve(@embedFile("test.txt"));
    try std.testing.expectEqual(143, answer.middle_sum);
    try std.testing.expectEqual(123, answer.reordered_sum);
}
```
Haskell
I should probably have used `sortBy` instead of this ad-hoc selection sort.

```haskell
import Control.Arrow
import Control.Monad
import Data.Char
import Data.List qualified as L
import Data.Map
import Data.Set
import Data.Set qualified as S
import Text.ParserCombinators.ReadP

parse = (,) <$> (fromListWith S.union <$> parseOrder) <*> (eol *> parseUpdate)

parseOrder = endBy (flip (,) <$> (S.singleton <$> parseInt <* char '|') <*> parseInt) eol

parseUpdate = endBy (sepBy parseInt (char ',')) eol

parseInt = read <$> munch1 isDigit

eol = char '\n'

verify :: Map Int (Set Int) -> [Int] -> Bool
verify m = and . (zipWith fn <*> scanl (flip S.insert) S.empty)
  where
    fn a = flip S.isSubsetOf (findWithDefault S.empty a m)

getMiddle = ap (!!) ((`div` 2) . length)

part1 m = sum . fmap getMiddle

getOrigin :: Map Int (Set Int) -> Set Int -> Int
getOrigin m l = head $ L.filter (S.disjoint l . preds) (S.toList l)
  where
    preds = flip (findWithDefault S.empty) m

order :: Map Int (Set Int) -> Set Int -> [Int]
order m s
  | S.null s = []
  | otherwise = h : order m (S.delete h s)
  where
    h = getOrigin m s

part2 m = sum . fmap (getMiddle . order m . S.fromList)

main = getContents >>= print . uncurry runParts . fst . last . readP_to_S parse

runParts m = L.partition (verify m) >>> (part1 m *** part2 m)
```
I was very unhappy because my previous implementation took 1 second to execute and trashed through 2 GB of RAM in the process, so I sat down again with some inspiration about the sorting approach. I am very happy now; the profiler tells me that most of the time is spent in the parsing functions.

I am also grateful for everyone else doing Haskell; this way I learned about Arrays, Bifunctors and Arrows, which (I think) improved my code a lot.
Haskell
```haskell
import Control.Arrow hiding (first, second)
import Data.Map (Map)
import Data.Set (Set)
import Data.Bifunctor

import qualified Data.Maybe as Maybe
import qualified Data.List as List
import qualified Data.Map as Map
import qualified Data.Set as Set
import qualified Data.Ord as Ord

parseRule :: String -> (Int, Int)
parseRule s = (read . take 2 &&& read . drop 3) s

replace t r c = if t == c then r else c

parse :: String -> (Map Int (Set Int), [[Int]])
parse s = (map parseRule >>> buildRuleMap $ rules, map (map read . words) updates)
  where
    rules = takeWhile (/= "") . lines $ s
    updates = init . map (map (replace ',' ' ')) . drop 1 . dropWhile (/= "") . lines $ s

middleElement :: [a] -> a
middleElement us = (us !!) $ (length us `div` 2)

ruleGroup :: Eq a => (a, b) -> (a, b') -> Bool
ruleGroup = curry (uncurry (==) <<< fst *** fst)

buildRuleMap :: [(Int, Int)] -> Map Int (Set Int)
buildRuleMap rs = List.sortOn fst
  >>> List.groupBy ruleGroup
  >>> map ((fst . head) &&& map snd)
  >>> map (second Set.fromList)
  >>> Map.fromList
  $ rs

elementSort :: Map Int (Set Int) -> Int -> Int -> Ordering
elementSort rs a b
  | Maybe.maybe False (Set.member b) (rs Map.!? a) = LT
  | Maybe.maybe False (Set.member a) (rs Map.!? b) = GT
  | otherwise = EQ

isOrdered rs u = (List.sortBy (elementSort rs) u) == u

part1 (rs, us) = filter (isOrdered rs) >>> map middleElement >>> sum $ us

part2 (rs, us) = filter (isOrdered rs >>> not)
  >>> map (List.sortBy (elementSort rs))
  >>> map middleElement
  >>> sum
  $ us

main = getContents >>= print . (part1 &&& part2) . parse
```
Python
Also took advantage of `cmp_to_key`.

```python
from functools import cmp_to_key
from pathlib import Path


def parse_input(input: str) -> tuple[dict[int, list[int]], list[list[int]]]:
    rules, updates = tuple(input.strip().split("\n\n")[:2])
    order = {}
    for entry in rules.splitlines():
        values = entry.split("|")
        order.setdefault(int(values[0]), []).append(int(values[1]))
    updates = [[int(v) for v in u.split(",")] for u in updates.splitlines()]
    return (order, updates)


def is_ordered(update: list[int], order: dict[int, list[int]]) -> bool:
    return update == sorted(
        update, key=cmp_to_key(lambda a, b: 1 if a in order.setdefault(b, []) else -1)
    )


def part_one(input: str) -> int:
    order, updates = parse_input(input)
    return sum([u[len(u) // 2] for u in (u for u in updates if is_ordered(u, order))])


def part_two(input: str) -> int:
    order, updates = parse_input(input)
    return sum(
        [
            sorted(u, key=cmp_to_key(lambda a, b: 1 if a in order[b] else -1))[
                len(u) // 2
            ]
            for u in (u for u in updates if not is_ordered(u, order))
        ]
    )


if __name__ == "__main__":
    input = Path("input").read_text("utf-8")
    print(part_one(input))
    print(part_two(input))
```
C
I got the question so wrong: I thought a|b and b|c would imply a|c, so I went and used dynamic programming to propagate indirect relations through a table.
It worked beautifully but not for the input, which doesn’t describe an absolute global ordering at all. It may well give a|c and b|c AND c|a. Nothing can be deduced then, and nothing needs to, because all required relations are directly specified.
The table works great though, the sort comparator is a simple 2D array index, so O(1).
Code
#include "common.h" #define TSZ 100 #define ASZ 32 /* tab[a][b] is -1 if a<b and 1 if a>b */ static int8_t tab[TSZ][TSZ]; static int cmp(const void *a, const void *b) { return tab[*(const int *)a][*(const int *)b]; } int main(int argc, char **argv) { char buf[128], *rest, *tok; int p1=0,p2=0, arr[ASZ],srt[ASZ], n,i, a,b; if (argc > 1) DISCARD(freopen(argv[1], "r", stdin)); while (fgets(buf, sizeof(buf), stdin)) { if (sscanf(buf, "%d|%d", &a, &b) != 2) break; assert(a>=0); assert(a<TSZ); assert(b>=0); assert(b<TSZ); tab[a][b] = -(tab[b][a] = 1); } while ((rest = fgets(buf, sizeof(buf), stdin))) { for (n=0; (tok = strsep(&rest, ",")); n++) { assert(n < (int)LEN(arr)); sscanf(tok, "%d", &arr[n]); } memcpy(srt, arr, n*sizeof(*srt)); qsort(srt, n, sizeof(*srt), cmp); *(memcmp(srt, arr, n*sizeof(*srt)) ? &p1 : &p2) += srt[n/2]; } printf("05: %d %d\n", p1, p2); return 0; }
Same, I initially also thought a|b and b|c implies a|c. However, when I drew the graph of the example on paper, I suspected that all relations would be given, and coded it with that assumption, which turned out to be correct.
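For the curious, the abandoned idea both of these comments describe (propagating a|b and b|c into a|c) amounts to a transitive closure. A minimal Floyd-Warshall-style Python sketch, unnecessary for the real input since every needed pair is given directly:

```python
def transitive_closure(pairs, size=100):
    """before[a][b] is True iff a must precede b, directly or via intermediate pages."""
    before = [[False] * size for _ in range(size)]
    for a, b in pairs:
        before[a][b] = True
    # Floyd-Warshall-style propagation: a|k and k|b imply a|b
    for k in range(size):
        for a in range(size):
            if before[a][k]:
                for b in range(size):
                    if before[k][b]:
                        before[a][b] = True
    return before

closure = transitive_closure([(47, 53), (53, 29)])
print(closure[47][29])  # True: inferred, never given directly
```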
Python
sort using a compare function
```python
from math import floor
from pathlib import Path
from functools import cmp_to_key

cwd = Path(__file__).parent


def parse_protocol(path):
    with path.open("r") as fp:
        data = fp.read().splitlines()

    rules = data[:data.index('')]
    page_to_rule = {r.split('|')[0]: [] for r in rules}
    [page_to_rule[r.split('|')[0]].append(r.split('|')[1]) for r in rules]

    updates = list(map(lambda x: x.split(','), data[data.index('') + 1:]))

    return page_to_rule, updates


def sort_pages(pages, page_to_rule):
    compare_pages = lambda page1, page2: \
        0 if page1 not in page_to_rule or page2 not in page_to_rule[page1] else -1
    return sorted(pages, key=cmp_to_key(compare_pages))


def solve_problem(file_name, fix):
    page_to_rule, updates = parse_protocol(Path(cwd, file_name))

    to_print = [temp_p[int(floor(len(pages) / 2))] for pages in updates
                if (not fix and (temp_p := pages) == sort_pages(pages, page_to_rule))
                or (fix and (temp_p := sort_pages(pages, page_to_rule)) != pages)]

    return sum(map(int, to_print))
```
No need for `floor`, you can just use `len(pages) // 2`.

Nice one, thanks!