Day 1: Trebuchet?!
Part 1
Not much going on in part one. We need to extract the digits in each line and then add together \(10\times\) all the first digits and all the last digits.
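All of the solutions below assume this shared preamble, in particular the load helper from the author's utils module:
```python
import dataclasses
import functools
import itertools
import os
import re
import sys
from collections import defaultdict, deque, namedtuple
from queue import PriorityQueue

import more_itertools
import numpy as np
import pandas as pd
import scipy

sys.path.insert(1, "..")
import utils

load = utils.year_load(2023)
```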
```python
data = [[int(char) for char in line if char in "0123456879"] for line in load(1)]
10 * sum(x[0] for x in data) + sum(x[-1] for x in data)
```
Part 2
For part two, we need to work with the string representation of the numbers. The examples show that the numbers can overlap, so we want a string like “fiveight” to show a “5” first and then an “8”.
We are only interested in the first and last digits of the string, so this could be done using a sliding window. Or we could hack it by padding the string representation of the number and doing a search and replace:
```python
number_names = [
    ("one", "one1one"),
    ("two", "two2two"),
    ("three", "three3three"),
    ("four", "four4four"),
    ("five", "five5five"),
    ("six", "six6six"),
    ("seven", "seven7seven"),
    ("eight", "eight8eight"),
    ("nine", "nine9nine"),
]
data = load(1, "raw")
for pair in number_names:
    data = data.replace(pair[0], pair[1])
data = data.split("\n")[:-1]
data = [[int(char) for char in line if char in "0123456879"] for line in data]
10 * sum(x[0] for x in data) + sum(x[-1] for x in data)
```
Day 2: Cube Conundrum
Part 1
The most fiddly task in part 1 is parsing the input. Each line is a single game, with the game id appearing first and then the game outcome, separated by a colon. Each game consists of multiple rounds (delimited by a semicolon) and each round reveals multiple colours (delimited by commas).
So we split on the colon to separate the game id from the outcome, then split the outcome on semicolons to get each round, and finally split each round on commas to find the colours. A bit of regex lets us finally get to a list of three integers as the representation of a round.
Once we have that, we can compare each round in a game with the maximum number of [red, green, blue] cubes available and see if it is possible. A game fails if any single round is impossible.
```python
def parse_game_round(game_round):
    colormap = {"red": 0, "green": 1, "blue": 2}
    regex = r"([0-9]*) (red|green|blue)"
    result = [0, 0, 0]
    for v, c in [re.search(regex, entry).groups() for entry in game_round.split(",")]:
        result[colormap[c]] = int(v)
    return result


total = 0
games = [
    np.array([parse_game_round(x) for x in line.split(":")[1].split(";")])
    for line in load(2)
]
for idx, game in enumerate(games):
    if ((game - [12, 13, 14]) <= 0).all():
        total += idx + 1
total
```
Part 2
With part 1 out of the way, part 2 is trivial: we get the same representation for each game as before, and just calculate the maximum for each coordinate in any round, and then multiply those three numbers together.
```python
sum(np.product(game.max(axis=0)) for game in games)
```
Day 3: Gear Ratios
Part 1
It seems the theme for AOC this year is “let’s make things annoying to parse”.
We’ll need to extract the values and locations of all the numbers in the grid, and then compare that with the locations of the symbols. To get the coordinates that neighbor a symbol we can use a neat convolution trick. To get the coordinates of each number we’ll loop over all the lines in the grid, and use a regex to find the numbers.
Once we have that, we can find the desired value by finding the set intersection of the number coordinates and the neighbor coordinates; if that’s non-empty, we add the number to a running total.
```python
data = load(3)
symbols = np.array([[(c not in "0123456789") and c != "." for c in l] for l in data])
w = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]
neighbors = scipy.ndimage.convolve(symbols, w, mode="constant")
neighbors = set(zip(*np.where(neighbors)))
numbers = {}
for i, row in enumerate(data):
    for n in re.finditer("\d+", row):
        numbers[frozenset((i, j) for j in range(*n.span()))] = int(n.group())
sum(numbers[key] if key & neighbors else 0 for key in numbers)
```
Part 2
After part 1, part 2 is pretty simple. We can use the same (coordinates) -> number mapping as before, and then just loop over all locations in the grid that have a value of “*”. We find the neighbors of each star, intersect with all the number coordinates, and only use the stars that intersect exactly two numbers.
```python
total = 0
offsets = np.array(list(zip(*np.where(w)))) - 1
for star in zip(*np.where(np.array([[c for c in l] for l in data]) == "*")):
    neighbors = set([tuple(x) for x in star + offsets])
    values = [numbers[key] for key in numbers if key & neighbors]
    total += values[0] * values[1] if len(values) == 2 else 0
total
```
Day 4: Scratchcards
Part 1
After the interesting parsing tasks of the last few days, today was straightforward. Part one can be boiled down to this one-liner, which I don’t even think is completely illegible.
```python
sum(int(2 ** (len(set(row[1:11]) & set(row[11:])) - 1)) for row in load(4, "int"))
```
Part 2
I could have found some way of saving the intersections from part 1 so that I didn’t have to recalculate in part two, but it’s not that complicated.
```python
data = load(4, "int")
counts = np.ones(len(data), dtype=int)
for i, row in enumerate(data):
    wins = len(set(row[1:11]) & set(row[11:]))
    counts[i + 1 : i + wins + 1] += counts[i]
sum(counts)
```
Day 5: If You Give A Seed A Fertilizer
Part 1
The first part is a straightforward implementation of the requirements.
To parse the file, we first split on the blank lines ("\n\n") to get each of the sections separately, then for each line of each section, we extract all of the integers. These are all positive, so we can do that with the regex r"(\d+)". After skipping the lines which don’t contain any integers, we have a sensible representation for our data.
After that it’s just a question of following through what happens to each initial value: for each one we scan through the rulesets in order, and when we find a rule in a ruleset that matches we convert it to the new value and move on. If we don’t find a rule that matches we’re told that the converted value is the same as the original one.
```python
[seeds], *rules = [
    [
        [int(x) for x in ints]
        for line in groups.split("\n")
        if (ints := re.findall("(\d+)", line))
    ]
    for groups in load(5, "raw").split("\n\n")
]
minval = np.inf
for seed in seeds:
    current = seed
    for ruleset in rules:
        for destination, source, length in ruleset:
            if source <= current < source + length:
                current = current + destination - source
                break
    if current < minval:
        minval = current
minval
```
Part 2
For part two we need to be a bit cleverer. We know that each rule converts a specific source range to a specific destination range. So to apply a rule to an arbitrary range, we split the range into three parts: the part of the range before the rule applies, the part that intersects the rule, and the part after the rule. Some of these parts can be empty, but that’s OK.
From there, building a routine to iteratively apply each ruleset to the original ranges is not too tricky.
```python
def split_range(r, rule):
    return [
        x if x[0] < x[1] else []
        for x in [
            (r[0], min(rule[0], r[1])),
            (max(r[0], rule[0]), min(r[1], rule[1])),
            (max(rule[1], r[0]), r[1]),
        ]
    ]


def split_ranges(ranges, rule):
    dest, src, length = rule
    done, todo = [], []
    for l, m, r in [split_range(r, [src, src + length]) for r in ranges]:
        todo += [l] if l else []
        todo += [r] if r else []
        done += [(m[0] + dest - src, m[1] + dest - src)] if m else []
    return done, todo


ranges = [(start, start + l) for start, l in more_itertools.chunked(seeds, 2)]
for ruleset in rules:
    todo = ranges
    ranges = []
    for rule in ruleset:
        new_ranges, todo = split_ranges(todo, rule)
        ranges += new_ranges
    ranges += todo
min(x[0] for x in ranges)
```
Day 6: Wait For It
Part 1
We can set up an equation for how far the boat will move in a given time \(t\), when waiting for a given period \(w\) at the start, to wit \[
d(t, w) = w(t-w)
\]
We are interested in which values of \(w\) give \(d(t_0, w) > d_0\), for the \(d_0, t_0\) pairs we are given in the input, which is the same as exploring when the parabola described by \(-w^2 +wt_0 - d_0\) is positive. This parabola has a maximum at \(w = \frac{t_0}{2}\), and its positive region (if any) will lie between the two roots. The roots are given by \[
w_{1,2} = \frac{t_0 \mp \sqrt{t_0^2 - 4 d_0}}{2},
\]
and for each \((d_0, t_0)\) pair we are interested in how many integers lie in the open interval \((w_1, w_2)\).
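Putting that together:
```python
ts, ds = np.array(load(6, "int"))
Δs = np.sqrt(ts**2 - 4 * ds)
np.prod(np.floor(ts / 2 + Δs / 2 - 1e-10) - np.ceil(ts / 2 - Δs / 2 + 1e-10) + 1)
```
Part 2
For part 2, we don’t need to change anything apart from joining the digits into a single time and a single distance.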
```python
t = int("".join([str(x) for x in ts]))
d = int("".join([str(x) for x in ds]))
Δ = np.sqrt(t**2 - 4 * d)
np.floor(t / 2 + Δ / 2 - 1e-10) - np.ceil(t / 2 - Δ / 2 + 1e-10) + 1
```
Day 7: Camel Cards
Part 1
This feels doable. The key is to find a method to compare two hands of cards. We can use np.unique to get the count of how many times each unique value appears in the hand, which is almost exactly what we need. If we sort this count, then two hands will compare correctly if we compare their count tuples, since tuples sort lexicographically. The final comparator is [counts, [card_value for card in hand]], to correctly sort hands of the same type but with different values.
```python
def counts(hand, part=1):
    hand = [x for x in hand if x != "J"] if part == 2 else hand
    _, counts = np.unique([x for x in hand], return_counts=True)
    counts = sorted(counts, reverse=True) if hand else [0]
    counts[0] += 5 - len(hand)
    return counts


data = [x.strip().split() for x in load(7)]
order = sorted(
    data, key=lambda row: [counts(row[0]), ["23456789TJQKA".index(c) for c in row[0]]]
)
sum([int(x[1]) for x in order] * np.arange(1, 1 + len(order)))
```
Part 2
Part 2 was similar enough to part 1 that I just made a flag in the counts function and changed the order of the card values.
```python
order = sorted(
    data,
    key=lambda row: [counts(row[0], part=2), ["J23456789TQKA".index(c) for c in row[0]]],
)
sum([int(x[1]) for x in order] * np.arange(1, 1 + len(order)))
```
Day 8: Haunted Wasteland
Part 1
For part 1 we build a dictionary of left, right instructions for each node, which makes following a path from start to end easy.
```python
instructions, lines = load(8, "raw").split("\n\n")
instructions = instructions.strip()
data = [words for line in lines.split("\n") if (words := re.findall("[A-Z]+", line))]
nodes = {node: {"L": left, "R": right} for node, left, right in data}
node = "AAA"
i = 0
while node != "ZZZ":
    node = nodes[node][instructions[i % len(instructions)]]
    i += 1
i
```
Part 2
Part 2 screams cycle checking, and indeed that’s what it is. The state at any given time is given by the current node and the index into the instruction list. If we ever see the same state twice, we know we’re in a cycle, and can figure out the period. All of the cycles turn out to have periods that match the offset from the start, so we can just use the lcm to find the common period. If some of the cycles had had a different offset, we would have needed the full Chinese remainder theorem.
```python
import math

i = 0
periods = []


def find_cycle(node):
    seen = {}
    i = 0
    z = None
    while (node, i % len(instructions)) not in seen:
        seen[node, i % len(instructions)] = i
        node = nodes[node][instructions[i % len(instructions)]]
        i += 1
        if node[-1] == "Z":
            z = i
    period = i - seen[(node, i % len(instructions))]
    return period, z % period


periods, congruences = list(
    zip(*[find_cycle(node) for node in nodes if node[-1] == "A"])
)
math.lcm(*periods)
```
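If the offsets had differed, the congruences could have been combined pairwise; here is a minimal sketch of that idea (the combine helper below is hypothetical and not part of the solution above):
```python
from functools import reduce
from math import gcd


def combine(pair_a, pair_b):
    # Merge two (period, offset) pairs: find the smallest t with
    # t % period_a == offset_a and t % period_b == offset_b,
    # together with the combined period lcm(period_a, period_b).
    (period_a, offset_a), (period_b, offset_b) = pair_a, pair_b
    g = gcd(period_a, period_b)
    if (offset_b - offset_a) % g:
        raise ValueError("no common solution")
    t = offset_a
    while t % period_b != offset_b % period_b:
        t += period_a
    return period_a // g * period_b, t


# e.g. reduce(combine, zip(periods, congruences)) would give a combined
# (period, offset) pair if the per-ghost cycles had different offsets.
```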
Day 9: Mirage Maintenance
Part 1
This one’s pretty straightforward: calculate all the needed differences; add the last element of each difference to the calculation.
```python
def score(line, part=1):
    total = 0
    if part == 2:
        line = line[::-1]
    while any(line):
        total += line[-1]
        line = [line[i] - line[i - 1] for i in range(1, len(line))]
    return total


sum(score(line) for line in load(9, "int"))
```
Part 2
…and part 2 is simple enough that it can be included in part 1 with a flag
```python
sum(score(line, part=2) for line in load(9, "int"))
```
Day 10: Pipe Maze
Part 1
For part 1 we find the two directions leading away from the starting point, and follow the path along each one at the same pace. The point where they overlap is the point furthest away from the start, so we return that.
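In code:
```python
data = np.pad(
    np.array([[char for char in line.strip()] for line in load(10)]),
    1,
    constant_values=".",
)
connections = {
    "-": [(0, -1), (0, 1)],
    "|": [(-1, 0), (1, 0)],
    "L": [(-1, 0), (0, 1)],
    "J": [(-1, 0), (0, -1)],
    "7": [(1, 0), (0, -1)],
    "F": [(1, 0), (0, 1)],
    ".": [],
}
Δs = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]])
point = np.array(next(zip(*np.where(data == "S"))))
(lx, lv), (rx, rv) = [
    (point + Δ, Δ) for Δ in Δs if tuple(-Δ) in connections[data[tuple(point + Δ)]]
]


def update(point, direction):
    options = connections[data[tuple(point)]]
    new_direction = np.array(
        options[1] if tuple(-direction) == options[0] else options[0]
    )
    return point + new_direction, new_direction


i = 1
left_path = [point, lx]
right_path = [rx]
while not np.allclose(lx, rx):
    lx, lv = update(lx, lv)
    left_path.append(lx)
    rx, rv = update(rx, rv)
    right_path.append(rx)
    i += 1
i
```
Part 2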
For part 2, we need some way of distinguishing points inside the circuit from points outside it. Since the lines that make up the boundary of the circuit never cross, this is a point-in-polygon problem. We could solve it by raytracing: for every candidate point we can draw a ray to the outside edge and count how many times it crosses the boundary of the polygon; if that count is odd, the point is inside the polygon. We could also look at the winding number of the polygon with respect to the point: points inside will have a nonzero winding number, while points outside will have a winding number of zero.
Ultimately, what I ended up doing was just blowing up the grid to double size, flood filling the outside and looking at the even coordinate values of whatever was left. It’s stupid, but it works.
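Concretely:
```python
ys, xs = zip(*(left_path + right_path[:-1][::-1]))
dy, dx = np.diff([ys + (ys[0],), xs + (xs[0],)], axis=1)
board = np.ones((data.shape[0] * 2, data.shape[1] * 2))
ys, xs = map(np.array, [ys, xs])
board[2 * ys, 2 * xs] = 0
board[2 * ys + dy, 2 * xs + dx] = 0
board = np.pad(board[1:-1, 1:-1], 1, constant_values=0)
points = deque([(1, 1)])
while points:
    point = points.popleft()
    if board[point] == 0:
        continue
    board[point] = 0
    for Δ in Δs:
        nb = tuple(Δ + point)
        if board[nb]:
            points.append(nb)
int(board[::2, ::2].sum())
```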
Day 11: Cosmic Expansion
Part 1
I liked this puzzle, and I feel that I managed to come up with a reasonably slick array solution. We can get the coordinates of the original galaxies and the empty rows using np.where. For each empty row we can increase the first coordinate of the galaxies below it by some amount, and, mutatis mutandis, we can do the same for the empty columns.
That gives us a set of new coordinates, and we need to find the sum of all the Manhattan distances from one point to the others. For any pair of points \(i, j\), that’s \(\left|x_i - x_j\right| +\left|y_i - y_j\right|\); we can construct the entire matrix by taking the row vector of coordinates, subtracting from it the column vector of the same coordinates, and relying on numpy’s broadcasting magic.
```python
def solve(s=1):
    y, x = np.where(data == "#")
    empty_r = [i for i in range(len(data)) if all(data[i] == ".")]
    empty_c = [i for i in range(len(data)) if all(data[:, i] == ".")]
    new_y = y + s * np.array([y > empty_r[i] for i in range(len(empty_r))]).sum(axis=0)
    new_x = x + s * np.array([x > empty_c[i] for i in range(len(empty_c))]).sum(axis=0)
    return (
        abs(new_y - new_y.reshape(-1, 1)) + abs(new_x - new_x.reshape(-1, 1))
    ).sum() // 2


data = load(11, "chararray")
solve()
```
Part 2
The second part is pretty trivially included in the first:
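```python
solve(999_999)
```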
Day 12: Hot Springs
Part 1
The core of this solution is the count function, which takes a tuple of ints representing the three states (off, ambiguous and on), as well as a tuple of block lengths, and returns the number of assignments of the ambiguous values that work.
It’s recursive, with the following base cases; the third is checked last:
- If the number of on values is more than the sum of block lengths, no assignments are possible
- If the sum of the number of on values and ambiguous values is less than the sum of block lengths, no assignments are possible
- If the sum of block lengths is zero, exactly one assignment is possible
Otherwise, if the first character is off, then the count is the same as the count ignoring that assignment and we can recurse.
If the first character is on, we can check whether the first l characters would fit the first block, and the l+1’th character is either the end of the string or compatible with an off state. If it is, the count is the same as the count for the remainder of the string on the remainder of the blocks and we can recurse.
Finally, if the first character is ambiguous, the count is the sum of the counts for the two possible assignments of the character, and we can recurse.
```python
def parse(line):
    s, groups = line.strip().split(" ")
    lookup = {"#": 2, "?": 1, ".": 0}
    return tuple(lookup[char] for char in s), tuple(int(g) for g in groups.split(","))


data = [parse(x) for x in load(12)]


def match_beginning(data, length):
    return all(x > 0 for x in data[:length]) and (
        (len(data) == length) or data[length] < 2
    )


@functools.cache
def count(data, blocks):
    total = sum(blocks)
    minimum = sum(x == 2 for x in data)
    maximum = sum(x > 0 for x in data)
    if minimum > total or maximum < total:
        return 0
    if total == 0:
        return 1
    if data[0] == 0:
        return count(data[1:], blocks)
    if data[0] == 2:
        l = blocks[0]
        if match_beginning(data, l):
            if l == len(data):
                return 1
            return count(data[l + 1 :], blocks[1:])
        return 0
    return count(data[1:], blocks) + count((2,) + data[1:], blocks)


sum(count(*line) for line in data)
```
Part 2
With the memoization added to part 1, part 2 runs in 8 s with no changes needed. Not great, but not terrible.
```python
sum(count(((chars + (1,)) * 5)[:-1], blocks * 5) for chars, blocks in data)
```
Day 13: Point of Incidence
Part 1
This wasn’t too tricky. The idea is that we test all horizontal lines of reflection to see if there are any that match the given condition; if none are found, we rotate the array by 90 degrees clockwise and try again. For part 1, the test is that the two halves should line up exactly after flipping.
The only bit that requires some thought is how to account for the points beyond the top/bottom edge. We do that by saying that the number of lines on either side of the mirror line is the shortest distance to the top/bottom edge, so that only relevant lines are compared.
```python
def find_reflection(array, part=1):
    if part == 1:
        test = lambda a, b: (a == b[::-1]).all()
    else:
        test = lambda a, b: (a != b[::-1]).sum() == 1
    for i in range(1, len(array)):
        l = min(len(array) - i, i)
        if test(array[i - l : i], array[i : i + l]):
            return i
    return None


arrays = [
    np.array([[char for char in line.strip()] for line in array.split("\n")])
    for array in load(13, "raw").split("\n\n")
]
sum(
    100 * y
    if (y := find_reflection(array)) is not None
    else find_reflection(np.rot90(array, -1))
    for array in arrays
)
```
Part 2
Part 2 is so similar to part 1 that we can include it as a flag there; instead of a perfect match, the test is that exactly one pair of elements should be different on the two sides of the mirror line. Conceptually, that means that the sum of the differences should be exactly 1.
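With the flag set:
```python
sum(
    100 * y
    if (y := find_reflection(array, part=2)) is not None
    else find_reflection(np.rot90(array, -1), part=2)
    for array in arrays
)
```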
Day 14: Parabolic Reflector Dish
Part 1
Part 1 needs a bit of thought. I’ll represent the data as a numpy array, with -1 corresponding to unmovable rock, 0 to rolling rock, and 1 to empty space. To roll all the rocks northwards, we focus on one column at a time and, between every pair of unmovable rocks, sort the intervening data.
Once that’s done, we can just score the whole array.
```python
map_ = {"#": -1, "O": 0, ".": 1}
array = np.array([[map_[char] for char in line] for line in load(14)])
nrows, ncols = array.shape


def score(array):
    rolls = np.where(array == 0)[0]
    return (nrows - rolls).sum()


def roll(array):
    for i in range(ncols):
        rocks = [-1] + list(np.where(array[:, i] == -1)[0]) + [None]
        for j in range(len(rocks) - 1):
            left, right = rocks[j] + 1, rocks[j + 1]
            array[left:right, i] = np.sort(array[left:right, i])
    return array


score(roll(array))
```
Part 2
The numbers in part 2 are ridiculous enough that we obviously have to hope for some pattern in how the rocks move. We’ll store a fingerprint of the current state, and after each cycle, check if we’re in a state we’ve seen before. If we are, we’ve found a cycle and can skip straight to the end.
```python
def cycle(array):
    for i in range(4):
        array = roll(array)
        array = np.rot90(array, -1)
    return array


def hash_(array):
    return tuple(array.ravel())


seen, scores = {}, {}
maxval = 1_000_000_000
for i in range(maxval):
    h = hash_(array)
    if h in seen:
        break
    seen[h] = i
    scores[i] = score(array)
    array = cycle(array)
cycle_length = i - seen[h]
index = seen[h] + (maxval - seen[h]) % cycle_length
scores[index]
```
Day 15: Lens Library
Part 1
Part 1 can be done with a single expression. Always nice when that happens. I originally had both the hash function and the data loading directly in the sum generator expression, but I needed them for part two so I pulled them out to their own lines.
```python
def hash_(s):
    return functools.reduce(lambda x, y: (17 * (x + ord(y))) % 256, s, 0)


instructions = load(15, "raw").split(",")
sum(hash_(i) for i in instructions)
```
Part 2
Part 2 is fiddly and less fun. We need to run through each of the instructions and apply the procedure described. There might be better ways than this, but the below works:
```python
boxes = [{} for i in range(256)]
for instruction in instructions:
    label, f = instruction.replace("-", "=").split("=")
    destination = hash_(label)
    if "=" in instruction:
        boxes[destination][label] = int(f)
    elif label in boxes[destination]:
        del boxes[destination][label]


def score(box):
    return (list(box.values()) * np.arange(1, len(box) + 1)).sum()


int(sum(np.arange(1, 257) * [score(box) for box in boxes]))
```
Day 16: The Floor Will Be Lava
Part 1
Part 1 is fairly straightforward. We’ll need a way of tracking states we’ve already seen, and a recipe for moving from one state to the next. A state consists of a (position, direction) pair; if we ever hit a position and direction we’ve seen before we know we’re not going to do anything new (and that there’s an infinite loop in the light circuit).
We’ll store the grid as a dictionary of coordinates -> value, with the x and y coordinates encoded as a single complex number. That makes checking for when we’ve left the edge of the grid easy; we just have to check if the current coordinates are in the dictionary.
The diagonal mirrors transpose the coordinates of our direction, so that horizontal movement becomes vertical and vice versa. The beam splitters force us into vertical/horizontal movement and make us add an extra beam to the queue we’re going through.
```python
data = load(16, "chararray")
grid = {
    1j * y + x: data[y, x] for x in range(data.shape[1]) for y in range(data.shape[0])
}


def count_points(position, direction):
    positions = deque([(position, direction)])
    seen = set()
    while positions:
        position, direction = positions.popleft()
        while position in grid:
            if (position, direction) in seen:
                break
            seen.add((position, direction))
            char = grid[position]
            if char in "/\\":
                direction = int(direction.imag) + 1j * int(direction.real)
                direction *= -1 if char == "/" else 1
            elif char == "-" and direction.imag:
                positions.append((position - 1, -1))
                direction = 1
            elif char == "|" and direction.real:
                positions.append((position - 1j, -1j))
                direction = 1j
            position += direction
    return len(set(x[0] for x in seen))


count_points(0, 1)
```
Part 2
For part two we could probably do some clever memoization by making the above function recurse on beam splitters.
One potential disadvantage of that is that the input might contain infinite loops of light which require global information to be discovered. Passing this information to the memoized function would mean that we almost never get a cache hit, while not passing it risks getting stuck in an infinite loop.
Finally, brute force runs in an acceptable amount of time:
```python
def starting_position(direction, length):
    offset = data.shape[0] if (direction.imag + direction.real) < 0 else 0
    offset *= 1j if direction.imag else 1
    return offset + (1j if direction.real else 1) * length


max(
    count_points(starting_position(direction, x), direction)
    for x in range(data.shape[0])
    for direction in (-1, 1, -1j, 1j)
)
```
Day 17: Clumsy Crucible
Part 1
Our first pathfinding task! I couldn’t think of a good & simple-to-calculate heuristic, so we’ll just go with Dijkstra instead of A*.
We’re given the restriction that a cart must turn after at most three moves. The cost for a move can just be read out from the input grid, so the only question is how we represent a state and what the neighbors of each state are.
With the turning restriction, it makes sense to focus on where the cart turns, rather than where it moves. This means that a state can be represented by a tuple of (y, x, direction), where direction is 0 if the cart is moving vertically after the turn and 1 if it’s moving horizontally. The neighbors of a state are then the places where the cart could have its next turn, i.e. the points up to three tiles away vertically or horizontally, with movement along the other axis after the turn.
Implementing it looks like this:
```python
def navigate(grid, minval=1, maxval=3):
    q = PriorityQueue()
    max_y, max_x = (v - 1 for v in grid.shape)
    goal = max_y, max_x
    q.put((0, (0, 0, 0)))
    q.put((0, (0, 0, 1)))
    seen = set()
    while q:
        cost, (y, x, direction) = q.get()
        if (y, x) == goal:
            break
        if (y, x, direction) in seen:
            continue
        seen.add((y, x, direction))
        original_cost = cost
        for s in [-1, 1]:
            cost = original_cost
            new_y, new_x = y, x
            for i in range(1, maxval + 1):
                if direction == 1:
                    new_x = x + i * s
                else:
                    new_y = y + i * s
                if new_x < 0 or new_y < 0 or new_x > max_x or new_y > max_y:
                    break
                cost += grid[new_y, new_x]
                if ((new_y, new_x, 1 - direction)) in seen:
                    continue
                if i >= minval:
                    q.put((cost, (new_y, new_x, 1 - direction)))
    return cost


grid = np.array([[int(char) for char in line.strip()] for line in load(17)])
navigate(grid)
```
Part 2
Part 2 can be incorporated into the above code by changing the maximum number of allowed moves before a turn, and by only allowing turns after the crucible has moved at least minval steps. That’s easy to incorporate into the function for part 1, so the code just looks like:
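```python
navigate(grid, 4, 10)
```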
Day 18: Lavaduct Lagoon
Part 1
We’ll run through the list of instructions and make a list of the coordinates of all the vertices. Then we’ll use the shoelace formula to calculate the area of the described polygon. To account for the line of paint at the edge of the polygon, we’ll add half the perimeter of the polygon (which corresponds to increasing the total thickness of the polygon by 1 everywhere), and finally add one to correct for the missing paint in the exterior corners. There will always be four more exterior corners than interior corners, since going all the way around the polygon once is a 360 degree rotation.
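In code:
```python
ys, xs = [0], [0]
position = np.array([0, 0])
directions = {"U": [0, 1], "R": [1, 0], "D": [0, -1], "L": [-1, 0]}
for direction, count, _ in map(str.split, load(18)):
    position += np.array(directions[direction]) * (int(count))
    xs.append(position[0])
    ys.append(position[1])
int(
    ((xs * (np.roll(ys, 1) - np.roll(ys, -1))).sum()) / 2
    + sum(abs(np.diff(ys)) + abs(np.diff(xs))) / 2
    + 1
)
```
Part 2
I was not expecting part 2 to go there! Luckily, nothing changes with respect to what we did above apart from how we build the path around the polygon.
```python
ys, xs = [0], [0]
position = np.array([0, 0])
order = "RDLU"
for real_instruction in map(lambda x: x.split()[-1][2:-1], load(18)):
    count, direction = real_instruction[:-1], real_instruction[-1]
    position += np.array(directions[order[int(direction)]]) * int(count, 16)
    xs.append(position[0])
    ys.append(position[1])
int(
    ((xs * (np.roll(ys, 1) - np.roll(ys, -1))).sum()) / 2
    + sum(abs(np.diff(ys)) + abs(np.diff(xs))) / 2
    + 1
)
```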
Day 19: Aplenty
Part 1
```python
workflows, parts = [x.split("\n") for x in load(19, "raw").split("\n\n")]
Part = namedtuple("Part", "x m a s")
parts = [Part(*[int(x) for x in re.findall(r"\+?-?\d+", part)]) for part in parts]
name, instructions = workflows[0][:-1].split("{")
instruction_map = {}
for name, instructions in map(lambda x: x[:-1].split("{"), workflows):
    rules = [
        rule.split(":") if ":" in rule else [lambda y: True, rule]
        for rule in instructions.split(",")
    ]
    rules = [
        (eval("lambda y: y." + rule[0]), rule[1]) if isinstance(rule[0], str) else rule
        for rule in rules
    ]
    instruction_map[name] = instructions, rules


def follow(part, state):
    s, tests = instruction_map[state]
    for test, output in tests:
        if test(part):
            if output == "A":
                return True
            if output == "R":
                return False
            return follow(part, output)


sum(x.x + x.m + x.a + x.s for x in parts if follow(x, "in"))
```
Part 2
So, I thought I was being clever above by eval’ing the instructions to convert them directly to python functions. Unfortunately we have to modify the above approach to work with ranges of values rather than specific values, which has the effect of converting the filters from pass/fail into ones that split ranges in two: the parts that pass the filter, and the parts that don’t. I could go back and rework how I parsed the instructions in part 1 to fit with what I need to do for part 2, or I could just redo the parsing.
I’ll store each range as a dataclass of tuples, and each filter splits a range into at most two parts. The ranges are half-open on the left, so that x = (0, 5) represents the values [1, 2, 3, 4, 5]. The fact that they’re half-open lets us find the size of a range by just subtracting the endpoints, without having to fiddle with off-by-one errors, and I chose to do it on the left so that the initial range can be represented by the pair of numbers (0, 4000) rather than the uglier (1, 4001).
```python
@dataclasses.dataclass
class PartRange:
    x: tuple[int]
    m: tuple[int]
    a: tuple[int]
    s: tuple[int]


def is_valid(interval):
    return interval[1] > interval[0]


def split_coords(part_range, coordinate, cutoff, test):
    coord_range = getattr(part_range, coordinate)
    if test == ">":
        new_coords = ((cutoff, coord_range[1]), (coord_range[0], cutoff))
    elif test == "<":
        new_coords = ((coord_range[0], cutoff - 1), (cutoff - 1, coord_range[1]))
    else:
        raise ValueError("This shouldn't be possible")
    result = []
    for interval in new_coords:
        if not is_valid(interval):
            result.append(None)
        else:
            result.append(dataclasses.replace(part_range, **{coordinate: interval}))
    return result


def size(part_range):
    return np.product([np.diff(getattr(part_range, i)) for i in "xmas"])


def parse_rule_string(rule):
    if ":" not in rule:
        return rule
    coordinate = rule[0]
    test = rule[1]
    cutoff, destination = rule[2:].split(":")
    return coordinate, int(cutoff), test, destination


instructions = {}
for name, rules in map(lambda x: x[:-1].split("{"), workflows):
    instructions[name] = [parse_rule_string(rule) for rule in rules.split(",")]


def follow(instructions):
    def inner(part_range, state):
        if state == "R":
            return 0
        if state == "A":
            return size(part_range)
        rules = instructions[state]
        result = 0
        remainder = part_range
        for rule in rules:
            if remainder is None:
                break
            if isinstance(rule, str):
                result += inner(remainder, rule)
                break
            else:
                coordinate, cutoff, test, destination = rule
                passed, remainder = split_coords(remainder, coordinate, cutoff, test)
                result += inner(passed, destination) if passed is not None else 0
        return result

    initial = PartRange((0, 4000), (0, 4000), (0, 4000), (0, 4000))
    return inner(initial, "in")


follow(instructions)
```
Day 20: Pulse Propagation
```python
# for module in modules:
#     modules[module].reset()
bus = [("button", modules["broadcaster"], 0)]
while bus:
    source, destination, signal = bus.pop(0)
    print(
        source,
        destination.name,
        signal,
        "conjunction" if hasattr(destination, "signals") else "flipflop",
    )
    destination.recv(source, signal)
```
```python
import matplotlib.pyplot as plt
import networkx as nx

G = nx.from_dict_of_lists(
    {strip_prefix(k): v for k, v in network.items()}, create_using=nx.DiGraph
)
nodes, prefices = zip(*([(strip_prefix(k), k[0]) for k in network] + [("rx", "d")]))
colormap = {"%": "#1f77b4", "&": "#ff7f0e", "b": "#2ca02c", "d": "#d62728"}
colors = [colormap[prefix] for prefix in prefices]
nx.draw_networkx(
    G, nodelist=nodes, node_color=colors, arrows=True, pos=nx.bfs_layout(G)
)
plt.tight_layout()
plt.axis("off")
```
```python
import matplotlib

for c in plt.cm.tab10.colors:
    print(matplotlib.colors.to_hex(c))
```
Day 21: Step Counter
Part 1
```python
data = np.array([list(line) for line in load(21)])
data = np.pad(data, 1, constant_values="#")
y, x = [x[0] for x in np.where(data == "S")]
data[y, x] = "."


def step(positions):
    result = set()
    for y, x in positions:
        for dy in [-1, 1]:
            if data[y + dy, x] == ".":
                result.add((y + dy, x))
        for dx in [-1, 1]:
            if data[y, x + dx] == ".":
                result.add((y, x + dx))
    return result


positions = [(y, x)]
for _ in range(64):
    positions = step(positions)
len(positions)
```
Source Code
---title: 2023 Solutions---# Imports```{python}# | eval: true# | output: falseimport dataclassesimport functoolsimport itertoolsimport osimport reimport sysfrom collections import defaultdict, deque, namedtuplefrom queue import PriorityQueueimport more_itertoolsimport numpy as npimport pandas as pdimport scipysys.path.insert(1, "..")import utilsload = utils.year_load(2023)```# [Day 1: Trebuchet?!](https://adventofcode.com/2023/day/1)## Part 1Not much going on in part one. We need to extract the digits in each line and then add together $10\times$ all the first digits and all the last digits.```{python}data = [[int(char) for char in line if char in"0123456879"] for line in load(1)]10*sum(x[0] for x in data) +sum(x[-1] for x in data)```## Part 2For part two, we need to work with the string representation of the numbers. The examples show that the numbers can overlap, so we want a string like "fiveight" to show a "5" first and then an "8".We are only interested in the first and last digits of the string, so this could be done using a sliding window. Or we could hack it by padding the string representation of the number and doing a search and replace:```{python}number_names = [ ("one", "one1one"), ("two", "two2two"), ("three", "three3three"), ("four", "four4four"), ("five", "five5five"), ("six", "six6six"), ("seven", "seven7seven"), ("eight", "eight8eight"), ("nine", "nine9nine"),]data = load(1, "raw")for pair in number_names: data = data.replace(pair[0], pair[1])data = data.split("\n")[:-1]data = [[int(char) for char in line if char in"0123456879"] for line in data]10*sum(x[0] for x in data) +sum(x[-1] for x in data)```# [Day 2: Cube Conundrum](https://adventofcode.com/2023/day/2)## Part 1The most fiddly task in part 1 is parsing the input. Each line is a single game, with the game id appearing first and then the game outcome, separated by a colon. Each game consists of multiple rounds (delimited by a semicolon) and each round reveals multiple colours (delimited by commas).So we split on the colon to separate the game id from the outcome, then split the outcome on semicolons to get each round, and finally split each round on commas to find the colours. A bit of regex let's us finally get to a list of three integers as the representation of a round.Once we have that, we can compare each round in a game with the maximum number of `[red, green, blue]`{.verbatim} cubes available and see if it is possible. A game fails if any single round is impossible.```{python}def parse_game_round(game_round): colormap = {"red": 0, "green": 1, "blue": 2} regex =r"([0-9]*) (red|green|blue)" result = [0, 0, 0]for v, c in [re.search(regex, entry).groups() for entry in game_round.split(",")]: result[colormap[c]] =int(v)return resulttotal =0games = [ np.array([parse_game_round(x) for x in line.split(":")[1].split(";")])for line in load(2)]for idx, game inenumerate(games):if ((game - [12, 13, 14]) <=0).all(): total += idx +1total```## Part 2With part 1 out of the way, part 2 is trivial: we get the same representation for each game as before, and just calculate the maximum for each coordinate in any round, and then multiply those three numbers together.```{python}sum(np.product(game.max(axis=0)) for game in games)```# [Day 3: Gear Ratios](https://adventofcode.com/2023/day/3)## Part 1It seems the theme for AOC this year is "let's make things annoying to parse".We'll need to extract the values and locations of all the numbers in the grid, and then compare that with the locations of the symbols. 
To get the coordinates that neighbor a symbol we can use a neat convolution trick. To get the coordinates of each number we'll loop over all the lines in the grid, and use a regex to find the numbers.Once we have that, we can find the desired value by finding the set intersetion of the number-coordinates and the neighbor coordinates; if that's non-empty, we add the number to a running total.```{python}data = load(3)symbols = np.array([[(c notin"0123456789") and c !="."for c in l] for l in data])w = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]neighbors = scipy.ndimage.convolve(symbols, w, mode="constant")neighbors =set(zip(*np.where(neighbors)))numbers = {}for i, row inenumerate(data):for n in re.finditer("\d+", row): numbers[frozenset((i, j) for j inrange(*n.span()))] =int(n.group())sum(numbers[key] if key & neighbors else0for key in numbers)```## Part 2After part 1, part 2 is pretty simple. We can use the same (coordinates) -\> number mapping as before, and then just loop over all locations in the grid that have a value of "\*". We find the neighbors of each star, intersect with all the numbers coordinates, and only use the ones that intersect exactly two numbers```{python}total =0offsets = np.array(list(zip(*np.where(w)))) -1for star inzip(*np.where(np.array([[c for c in l] for l in data]) =="*")): neighbors =set([tuple(x) for x in star + offsets]) values = [numbers[key] for key in numbers if key & neighbors] total += values[0] * values[1] iflen(values) ==2else0total```# [Day 4: Scratchcards](https://adventofcode.com/2023/day/4)## Part 1After the interesting parsing tasks of the last few days, today was straightforward. Part one can be boiled down to this oneliner, which I don't even think is completely illegible```{python}sum(int(2** (len(set(row[1:11]) &set(row[11:])) -1)) for row in load(4, "int"))```## Part 2I could have found some way of saving the intersections from part 1 so that I didn't have to recalculate in part two, but it's not that complicated.```{python}data = load(4, "int")counts = np.ones(len(data), dtype=int)for i, row inenumerate(data): wins =len(set(row[1:11]) &set(row[11:])) counts[i+1:i+wins+1] += counts[i]sum(counts)```# [Day 5: If You Give A Seed A Fertilizer](https://adventofcode.com/2023/day/5)## Part 1The first part is a straightforward implementation of the requirements.To parse the file, we first split on "`\n`{=latex}`\n`{=latex}" to get each of the sections separately, then for each line of each section, we extract all of the integers. These are all positive, so we can do that with the regex "(+̣)". After skipping through lines which don't contain integers, we have a sensible representation for our data.After that it's just a question of following through what happens to each initial value: for each one we scan through the rulesets in order, and when we find a rule in a ruleset that matches we convert it to the new value and move on. If we don't find a rule that matches we're told that the converted value is the same as the original one.```{python}[seeds], *rules = [ [ [int(x) for x in ints]for line in groups.split("\n")if (ints := re.findall("(\d+)", line)) ]for groups in load(5, "raw").split("\n\n")]minval = np.inffor seed in seeds: current = seedfor ruleset in rules:for destination, source, length in ruleset:if source <= current < source + length: current = current + destination - sourcebreakif current < minval: minval = currentminval```## Part 2For part two we need to be a bit cleverer. 
We know that each rule converts a specific source range to a specific destination range. So to apply a rule to an arbitrary range, we split the range into three: The parts of the range before the rule applies, the parts of the range that intersect the rule and the parts of the range after the rule. Some of these parts can be empty, but that's OK.From there, building a routine to iteratively apply each ruleset to the original ranges is not too tricky.```{python}def split_range(r, rule):return [ x if x[0] < x[1] else []for x in [ (r[0], min(rule[0], r[1])), (max(r[0], rule[0]), min(r[1], rule[1])), (max(rule[1], r[0]), r[1]), ] ]def split_ranges(ranges, rule): dest, src, length = rule done, todo = [], []for l, m, r in [split_range(r, [src, src + length]) for r in ranges]: todo += [l] if l else [] todo += [r] if r else [] done += [(m[0] + dest - src, m[1] + dest - src)] if m else []return done, todoranges = [(start, start + l) for start, l in more_itertools.chunked(seeds, 2)]for ruleset in rules: todo = ranges ranges = []for rule in ruleset: new_ranges, todo = split_ranges(todo, rule) ranges += new_ranges ranges += todomin(x[0] for x in ranges)```# [Day 6: Wait For It](https://adventofcode.com/2023/day/6)## Part 1We can set up an equation for how far the boat will move in a given time, $t$ when waiting for a given period $w$ at the start, to wit $$d(t, w) = w(t-w)$$We are interested in which values of $w$ give $d(t_0, w) > d_0$, for the $d_0, t_0$ pairs we are given in the input, which is the same as exploring when the parabola described by $-w^2 +wt_0 - d_0$ is positive. This parabola has a maximum at $w = \frac{t_0}{2}$, and it's positive region (if any) will lie between the two roots. The roots are given by$$w_{1,2} = \frac{t_0 \mp \sqrt{t_0^2 - 4d_0^2}}{2};$$and for each $(d_0, t_0)$ pair we are interested in how many integers lie in the open interval $(w_1, w_2)$```{python}ts, ds = np.array(load(6, "int"))Δs = np.sqrt(ts**2-4* ds)np.prod(np.floor(ts /2+ Δs /2-1e-10) - np.ceil(ts /2- Δs /2+1e-10) +1)```## Part 2For part 2, we don't need to change anything.```{python}t =int("".join([str(x) for x in ts]))d =int("".join([str(x) for x in ds]))Δ = np.sqrt(t**2-4* d)np.floor(t /2+ Δ /2-1e-10) - np.ceil(t /2- Δ /2+1e-10) +1```# [Day 7: Camel Cards](https://adventofcode.com/2023/day/7)## Part 1This feels doable. The key is to find a method to compare two hands of cards. We can use `np.unique`{.verbatim} to get the count of how many times each unique value appears in the hand, which is almost exactly what we need. If we sort this count, then two hands will compare correctly if we compare their count tuples, since tuples sort lexicographically. 
The final comparator is `[counts, [card_value for card in hand]]`{.verbatim}, to correctly sort hands of the same type but with different values.```{python}def counts(hand, part=1): hand = [x for x in hand if x !="J"] if part ==2else hand _, counts = np.unique([x for x in hand], return_counts=True) counts =sorted(counts, reverse=True) if hand else [0] counts[0] +=5-len(hand)return countsdata = [x.strip().split() for x in load(7)]order =sorted( data, key=lambda row: [counts(row[0]), ["23456789TJQKA".index(c) for c in row[0]]])sum([int(x[1]) for x in order] * np.arange(1, 1+len(order)))```## Part 2Part 2 was similar enough to part 1 that I just made a flag in the `counts`{.verbatim} function and changed the order of the card values```{python}order =sorted( data, key=lambda row: [counts(row[0], part=2), ["J23456789TQKA".index(c) for c in row[0]]],)sum([int(x[1]) for x in order] * np.arange(1, 1+len(order)))```# [Day 8: Haunted Wasteland](https://adventofcode.com/2023/day/8)## Part 1For part 1 we build a dictionary of left, right instructions for each node, which makes following a path from start to end easy.```{python}instructions, lines = load(8, "raw").split("\n\n")instructions = instructions.strip()data = [words for line in lines.split("\n") if (words := re.findall("[A-Z]+", line))]nodes = {node: {"L": left, "R": right} for node, left, right in data}node ="AAA"i =0while node !="ZZZ": node = nodes[node][instructions[i %len(instructions)]] i +=1i```## Part 2Part 2 screams cycle checking, and indeed it is. The state at any given time is given by the current node and the index of the instruction list. If we ever see the same state twice, we know we're in a cycle, and can figure out the period. All of the cycles turn out to have periods that match the offset from the start, so we can just use the `lcm`{.verbatim} to find the common period. If some of the cycles had had a different offset, we would have need the full chinese remainder theorem.```{python}import mathi =0periods = []def find_cycle(node): seen = {} i =0 z =Nonewhile (node, i %len(instructions)) notin seen: seen[node, i %len(instructions)] = i node = nodes[node][instructions[i %len(instructions)]] i +=1if node[-1] =="Z": z = i period = i - seen[(node, i %len(instructions))]return period, z % periodperiods, congruences =list(zip(*[find_cycle(node) for node in nodes if node[-1] =="A"]))math.lcm(*periods)```# [Day 9: Mirage Maintenance](https://adventofcode.com/2023/day/9)## Part 1This one's pretty straightforward: calculate all the needed differences; add the last element of each difference to the calculation.```{python}def score(line, part=1): total =0if part ==2: line = line[::-1]whileany(line): total += line[-1] line = [line[i] - line[i -1] for i inrange(1, len(line))]return totalsum(score(line) for line in load(9, "int"))```## Part 2…and part 2 is simple enough that it can be included in part 1 with a flag```{python}sum(score(line, part=2) for line in load(9, "int"))```# [Day 10: Pipe Maze](https://adventofcode.com/2023/day/10)## Part 1For part 1 we find the two directions leading away from the starting point, and follow the path along each one at the same pace. 
The point where they overlap is the point furthest away from the start, so we return that.```{python}data = np.pad( np.array([[char for char in line.strip()] for line in load(10)]),1, constant_values=".",)connections = {"-": [(0, -1), (0, 1)],"|": [(-1, 0), (1, 0)],"L": [(-1, 0), (0, 1)],"J": [(-1, 0), (0, -1)],"7": [(1, 0), (0, -1)],"F": [(1, 0), (0, 1)],".": [],}Δs = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]])point = np.array(next(zip(*np.where(data =="S"))))(lx, lv), (rx, rv) = [ (point + Δ, Δ) for Δ in Δs iftuple(-Δ) in connections[data[tuple(point + Δ)]]]def update(point, direction): options = connections[data[tuple(point)]] new_direction = np.array( options[1] iftuple(-direction) == options[0] else options[0] )return point + new_direction, new_directioni =1left_path = [point, lx]right_path = [rx]whilenot np.allclose(lx, rx): lx, lv = update(lx, lv) left_path.append(lx) rx, rv = update(rx, rv) right_path.append(rx) i +=1i```## Part 2For part 2, we need some way of distinguishing points inside from outside the circuit. Since the lines that make up the boundary of the circuit never cross, this is a point in polygon problem. We could solve it by raytracing: for every point in the polygon we can draw all rays to the outside edge and see if they cross the boundary of the polygon an odd number of times. If they do, the point is inside the polygon. We could also look at the winding number of the polygon with respect the the point: points inside will have a nonzero winding number, while points outside will have a positive winding number.Ultimately, what I ended up doing was just blowing up the grid to double size, flood filling the outside and looking at the even coordinate values of whatever was left. It's stupid, but it works.```{python}ys, xs =zip(*(left_path + right_path[:-1][::-1]))dy, dx = np.diff([ys + (ys[0],), xs + (xs[0],)], axis=1)board = np.ones((data.shape[0] *2, data.shape[1] *2))ys, xs =map(np.array, [ys, xs])board[2* ys, 2* xs] =0board[2* ys + dy, 2* xs + dx] =0board = np.pad(board[1:-1, 1:-1], 1, constant_values=0)points = deque([(1, 1)])while points: point = points.popleft()if board[point] ==0:continue board[point] =0for Δ in Δs: nb =tuple(Δ + point)if board[nb]: points.append(nb)int(board[::2, ::2].sum())```# [Day 11: Cosmic Expansion](https://adventofcode.com/2023/day/11)## Part 1I liked this puzzle, and I feel that I managed to come up with an OK slick array solution. We can get the coordinates of the original galaxies and the empty rows using `np.where`{.verbatim}. For each empty row we can increase the first coordinate of the galaxies below it by some amount, and, mutatis mutandis, we can do the same for the empty columns.That gives us a set of new coordinates, and we need to find the sum of all the manhattan distances from one point to the others. 
for any pair of points $i, j$, that's $\left|x_i - x_j\right| +\left|y_i - y_j\right|$; we can construct the entire matrix by taking the row vector of coordinates, and subtracting from it the column vector of the same coordinates and relying on `numpy`{.verbatim}'s broadcasting magic.```{python}def solve(s=1): y, x = np.where(data =="#") empty_r = [i for i inrange(len(data)) ifall(data[i] ==".")] empty_c = [i for i inrange(len(data)) ifall(data[:, i] ==".")] new_y = y + s * np.array([y > empty_r[i] for i inrange(len(empty_r))]).sum(axis=0) new_x = x + s * np.array([x > empty_c[i] for i inrange(len(empty_c))]).sum(axis=0)return (abs(new_y - new_y.reshape(-1, 1)) +abs(new_x - new_x.reshape(-1, 1)) ).sum() //2data = load(11, "chararray")solve()```## Part 2The second part is pretty trivially included in the first```{python}solve(999_999)```# [Day 12: Hot Springs](https://adventofcode.com/2023/day/12)## Part 1The core of this solution is the `count`{.verbatim} function, which takes a tuple of ints representing the three states (off, ambiguous and on), as well as a tuple of block lengths, and returns the number of assignments of the ambiguous values that work.It's recursive, with the following base cases; the third is checked last:- If the number of on values is more than the sum of block lengths, no assignments are possible- If the sum of the number of on values and ambiguous values is less than the sum of block lengths, no assignments are possible- If the sum of block lengths is zero, exactly one assignment is possibleOtherwise, if the first character is off, then the count is the same as the count ignoring that assignment and we can recurse.If the first character is on, we can check whether the first `l`{.verbatim} characters would fit the first block, and the `l+1`{.verbatim}'th character is either the end of the string or compatible with an off state. If it is, the count is the same as the count for the remainder of the string on the remainder of the blocks and we can recurse.Finally, if the first character is ambiguous, the count is the sum of the counts for the two possible assignments of the character, and we can recurse.```{python}def parse(line): s, groups = line.strip().split(" ") lookup = {"#": 2, "?": 1, ".": 0}returntuple(lookup[char] for char in s), tuple(int(g) for g in groups.split(","))data = [parse(x) for x in load(12)]def match_beginning(data, length):returnall(x >0for x in data[:length]) and ( (len(data) == length) or data[length] <2 )@functools.cachedef count(data, blocks): total =sum(blocks) minimum =sum(x ==2for x in data) maximum =sum(x >0for x in data)if minimum > total or maximum < total:return0if total ==0:return1if data[0] ==0:return count(data[1:], blocks)if data[0] ==2: l = blocks[0]if match_beginning(data, l):if l ==len(data):return1return count(data[l +1 :], blocks[1:])return0return count(data[1:], blocks) + count((2,) + data[1:], blocks)sum(count(*line) for line in data)```## Part 2With the memoization added to part 1, part 2 runs in 8s with no changes needed. Not great, but not terrible```{python}sum(count(((chars + (1,)) *5)[:-1], blocks *5) for chars, blocks in data)```# [Day 13: Point of Incidence](https://adventofcode.com/2023/day/13)## Part 1This wasn't too tricky. The idea is that we test all horizontal lines of reflection to see if there are any that match the given condition; if none are found, we rotate the array by 90 degrees clockwise and try again. 
For part 1, the test is that the two halves should line up exactly after flipping.The only bit that requires some thought is how to account for the points beyond the top/bottom edge. We do that by saying that the number of lines on either side of the mirror line is the shortest distance to the top/bottom edge, so that only relevant lines are compared.```{python}def find_reflection(array, part=1):if part ==1: test =lambda a, b: (a == b[::-1]).all()else: test =lambda a, b: (a != b[::-1]).sum() ==1for i inrange(1, len(array)): l =min(len(array) - i, i)if test(array[i - l : i], array[i : i + l]):return ireturnNonearrays = [ np.array([[char for char in line.strip()] for line in array.split("\n")])for array in load(13, "raw").split("\n\n")]sum(100* yif (y := find_reflection(array)) isnotNoneelse find_reflection(np.rot90(array, -1))for array in arrays)```## Part 2Part 2 is so similar to part 1 that we can include it as a flag there; instead of a perfect match, the test is that exactly one pair of elements should be different on the two sides of the mirror line. Conceptually, that means that the sum of the differences should be exactly 1.```{python}sum(100* yif (y := find_reflection(array, part=2)) isnotNoneelse find_reflection(np.rot90(array, -1), part=2)for array in arrays)```# [Day 14: Parabolic Reflector Dish](https://adventofcode.com/2023/day/14)## Part 1Part 1 needs a bit of thought. I'll represent the data as a `numpy`{.verbatim} array, with -1 corresponding to unmoveable rock, 0 to rolling rock, and 1 to empty space. To roll all the rocks northwards, we should focus one column at a time, and between every pair of unmoveable rocks, we sort the intervening data.Once that's done, we can just score the whole array```{python}map_ = {"#": -1, "O": 0, ".": 1}array = np.array([[map_[char] for char in line] for line in load(14)])nrows, ncols = array.shapedef score(array): rolls = np.where(array ==0)[0]return (nrows - rolls).sum()def roll(array):for i inrange(ncols): rocks = [-1] +list(np.where(array[:, i] ==-1)[0]) + [None]for j inrange(len(rocks) -1): left, right = rocks[j] +1, rocks[j +1] array[left:right, i] = np.sort(array[left:right, i])return arrayscore(roll(array))```## Part 2The numbers in part 2 are ridiculous enough that we obviously have to hope for some pattern in how the rocks move. We'll store a fingerprint of the current state, and after each cycle, check if we're in a state we've seen before. If we are, we've found a cycle and can skip straight to the end.```{python}def cycle(array):for i inrange(4): array = roll(array) array = np.rot90(array, -1)return arraydef hash_(array):returntuple(array.ravel())seen, scores = {}, {}maxval =1_000_000_000for i inrange(maxval): h = hash_(array)if h in seen:break seen[h] = i scores[i] = score(array) array = cycle(array)cycle_length = i - seen[h]index = seen[h] + (maxval - seen[h]) % cycle_lengthscores[index]```# [Day 15: Lens Library](https://adventofcode.com/2023/day/15)## Part 1Part 1 can be done with a single expression. Always nice when that happens. I originally had both the hash function and the data loading directly in the sum generator expression, but I needed them for part two so I pulled them out to their own lines.```{python}def hash_(s):return functools.reduce(lambda x, y: (17* (x +ord(y))) %256, s, 0)instructions = load(15, "raw").split(",")sum(hash_(i) for i in instructions)```## Part 2Part 2 is fiddly and less fun. We need to run through each of the instructions and apply the procedure described. 
There might be better ways than this, but the below works:```{python}boxes = [{} for i inrange(256)]for instruction in instructions: label, f = instruction.replace("-", "=").split("=") destination = hash_(label)if"="in instruction: boxes[destination][label] =int(f)elif label in boxes[destination]:del boxes[destination][label]def score(box):return (list(box.values()) * np.arange(1, len(box) +1)).sum()int(sum(np.arange(1, 257) * [score(box) for box in boxes]))```# [Day 16: The Floor Will Be Lava](https://adventofcode.com/2023/day/16)## Part 1Part 1 is fairly straightforward. We'll need a way of tracking states we've already seen, and a recipe for moving from one state to the next. A state consists of a (position, direction) pair; if we ever hit a position and direction we've seen before we know we're not going to do anything new (and that there's an infinite loop in the light circuit).We'll store the grid as a dictionary of coordinates -\> value, with the x and y coordinates encoded as a single complex number. That makes checking for when we've left the edge of the grid easy; we just have to check if the current coordinates are in the dictionary.The diagonal mirrors transpose the coordinates of our direction, so that horizontal movement becomes vertical and vice versa. The beam splitters force us into vertical/horizontal movement and make us add an extra beam to the queue we're going through.```{python}data = load(16, "chararray")grid = {1j* y + x: data[y, x] for x inrange(data.shape[1]) for y inrange(data.shape[0])}def count_points(position, direction): positions = deque([(position, direction)]) seen =set()while positions: position, direction = positions.popleft()while position in grid:if (position, direction) in seen:break seen.add((position, direction)) char = grid[position]if char in"/\\": direction =int(direction.imag) +1j*int(direction.real) direction *=-1if char =="/"else1elif char =="-"and direction.imag: positions.append((position -1, -1)) direction =1elif char =="|"and direction.real: positions.append((position -1j, -1j)) direction =1j position += directionreturnlen(set(x[0] for x in seen))count_points(0, 1)```## Part 2For part two we could probably do some clever memoization by making the above function recurse on beam splitters.One potential disadvantage of that is that the input might contain infinite loops of light which require global information to be discovered. Passing this information to the memoized function would mean that we almost never get a cache hit, while not passing it risks getting stuck in an infinite loop.Finally, brute force runs in an acceptable amount of time:```{python}def starting_position(direction, length): offset = data.shape[0] if (direction.imag + direction.real) <0else0 offset *=1jif direction.imag else1return offset + (1jif direction.real else1) * lengthmax( count_points(starting_position(direction, x), direction)for x inrange(data.shape[0])for direction in (-1, 1, -1j, 1j))```# [Day 17: Clumsy Crucible](https://adventofcode.com/2023/day/17)## Part 1Our first pathfinding task! I couldn't think of a good & simple-to-calculate heuristic, so we'll just go with Dijkstra instead of A\*.We're given the restriction that a cart must turn after at most three moves. The cost for a move can just be read out from the input grid, so the only question is how we represent a state and what the neighbors of each state are.With the turning restriction, it makes sense to focus on where the cart turns, rather than where it moves. 
This means that a state can be represented by a tuple of `(y, x, direction)`{.verbatim}, where direction is 0 if the cart is moving vertically after the turn and 1 if it's moving horizontally. The neighbors of a state are then the places where the cart could have its next turn, i.e. the points up to three tiles away vertically or horizontally, with movement along the other axis after the turn.Implementing it looks like this:```{python}def navigate(grid, minval=1, maxval=3): q = PriorityQueue() max_y, max_x = (v -1for v in grid.shape) goal = max_y, max_x q.put((0, (0, 0, 0))) q.put((0, (0, 0, 1))) seen =set()while q: cost, (y, x, direction) = q.get()if (y, x) == goal:breakif (y, x, direction) in seen:continue seen.add((y, x, direction)) original_cost = costfor s in [-1, 1]: cost = original_cost new_y, new_x = y, xfor i inrange(1, maxval +1):if direction ==1: new_x = x + i * selse: new_y = y + i * sif new_x <0or new_y <0or new_x > max_x or new_y > max_y:break cost += grid[new_y, new_x]if ((new_y, new_x, 1- direction)) in seen:continueif i >= minval: q.put((cost, (new_y, new_x, 1- direction)))return costgrid = np.array([[int(char) for char in line.strip()] for line in load(17)])navigate(grid)```## Part 2Part 2 can be incorporated into the above code by changing the maximum number of allowed moves before a turn, and by only allowing turns after the crucible has moved at least `minval`{.verbatim} steps. That's easy to incorporate into the function for part 1, so the code just looks like:```{python}navigate(grid, 4, 10)```# [Day 18: Lavaduct Lagoon](https://adventofcode.com/2023/day/18)## Part 1We'll run through the list of instructions and make a list of the coordinates of all the vertices. Then we'll use the shoelace formula to calculate the area of the described polygon. To account for the line of paint at the edge of the polygon, we'll add half the perimeter of the polygon (which corresponds to increasing the total thickness of the polygon by 1 everywhere), and finally add one to correct for the missing paint in the exterior corners. There will always be four more exterior corners than interior corners, since going all the way around the polygon once is a 360 degree rotation.```{python}ys, xs = [0], [0]position = np.array([0, 0])directions = {"U": [0, 1], "R": [1, 0], "D": [0, -1], "L": [-1, 0]}for direction, count, _ inmap(str.split, load(18)): position += np.array(directions[direction]) * (int(count)) xs.append(position[0]) ys.append(position[1])int( ((xs * (np.roll(ys, 1) - np.roll(ys, -1))).sum()) /2+sum(abs(np.diff(ys)) +abs(np.diff(xs))) /2+1)```## Part 2I was not expecting part 2 to go there! 
## Part 2

I was not expecting part 2 to go there! Luckily, nothing changes with respect to what we did above apart from how we build the path around the polygon:

```{python}
ys, xs = [0], [0]
position = np.array([0, 0])
order = "RDLU"
for real_instruction in map(lambda x: x.split()[-1][2:-1], load(18)):
    count, direction = real_instruction[:-1], real_instruction[-1]
    position += np.array(directions[order[int(direction)]]) * int(count, 16)
    xs.append(position[0])
    ys.append(position[1])

int(
    ((xs * (np.roll(ys, 1) - np.roll(ys, -1))).sum()) / 2
    + sum(abs(np.diff(ys)) + abs(np.diff(xs))) / 2
    + 1
)
```

# [Day 19: Aplenty](https://adventofcode.com/2023/day/19)

## Part 1

```{python}
workflows, parts = [x.split("\n") for x in load(19, "raw").split("\n\n")]

Part = namedtuple("Part", "x m a s")
parts = [Part(*[int(x) for x in re.findall(r"\+?-?\d+", part)]) for part in parts]

name, instructions = workflows[0][:-1].split("{")
instruction_map = {}
for name, instructions in map(lambda x: x[:-1].split("{"), workflows):
    rules = [
        rule.split(":") if ":" in rule else [lambda y: True, rule]
        for rule in instructions.split(",")
    ]
    rules = [
        (eval("lambda y: y." + rule[0]), rule[1]) if isinstance(rule[0], str) else rule
        for rule in rules
    ]
    instruction_map[name] = instructions, rules


def follow(part, state):
    s, tests = instruction_map[state]
    for test, output in tests:
        if test(part):
            if output == "A":
                return True
            if output == "R":
                return False
            return follow(part, output)


sum(x.x + x.m + x.a + x.s for x in parts if follow(x, "in"))
```

## Part 2

So, I thought I was being clever above by `eval`'ing the instructions to convert them directly to Python functions. Unfortunately we have to modify the above approach to work with ranges of values rather than specific values, which has the effect of converting the filters from pass/fail to ones that split ranges in two - the parts that pass the filter, and the parts that don't. I could go back and rework how I parsed the instructions in part 1 to fit with what I need to do for part 2, or I could just redo the parsing.

I'll store each range as a dataclass of tuples, and each filter splits a range into at most two parts. The ranges are half-open on the left, so that x = (0, 5) represents the values [1, 2, 3, 4, 5]. The fact that they're half-open lets us find the size of a range by just subtracting the endpoints, without having to fiddle with off-by-one errors, and I chose to do it on the left so that the initial range can be represented by the pair of numbers (0, 4000) rather than the uglier (1, 4001).
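A minimal illustration of the convention (my addition; `interval` is just a throwaway name):

```{python}
# Half-open on the left: (lo, hi] contains the integers lo + 1 .. hi,
# so its size is simply hi - lo.
interval = (0, 5)          # represents the values 1, 2, 3, 4, 5
interval[1] - interval[0]  # 5
```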
```{python}
@dataclasses.dataclass
class PartRange:
    x: tuple[int]
    m: tuple[int]
    a: tuple[int]
    s: tuple[int]


def is_valid(interval):
    return interval[1] > interval[0]


def split_coords(part_range, coordinate, cutoff, test):
    coord_range = getattr(part_range, coordinate)
    if test == ">":
        new_coords = ((cutoff, coord_range[1]), (coord_range[0], cutoff))
    elif test == "<":
        new_coords = ((coord_range[0], cutoff - 1), (cutoff - 1, coord_range[1]))
    else:
        raise ValueError("This shouldn't be possible")
    result = []
    for interval in new_coords:
        if not is_valid(interval):
            result.append(None)
        else:
            result.append(dataclasses.replace(part_range, **{coordinate: interval}))
    return result


def size(part_range):
    return np.product([np.diff(getattr(part_range, i)) for i in "xmas"])


def parse_rule_string(rule):
    if ":" not in rule:
        return rule
    coordinate = rule[0]
    test = rule[1]
    cutoff, destination = rule[2:].split(":")
    return coordinate, int(cutoff), test, destination


instructions = {}
for name, rules in map(lambda x: x[:-1].split("{"), workflows):
    instructions[name] = [parse_rule_string(rule) for rule in rules.split(",")]


def follow(instructions):
    def inner(part_range, state):
        if state == "R":
            return 0
        if state == "A":
            return size(part_range)
        rules = instructions[state]
        result = 0
        remainder = part_range
        for rule in rules:
            if remainder is None:
                break
            if isinstance(rule, str):
                result += inner(remainder, rule)
                break
            else:
                coordinate, cutoff, test, destination = rule
                passed, remainder = split_coords(remainder, coordinate, cutoff, test)
                result += inner(passed, destination) if passed is not None else 0
        return result

    initial = PartRange((0, 4000), (0, 4000), (0, 4000), (0, 4000))
    return inner(initial, "in")


follow(instructions)
```
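To see the splitting in action, here is an illustrative call I've added (not part of the original write-up) on the full starting range with a made-up rule `a<2000`; the first element is the part that passes the test and the second is the remainder handed on to the next rule:

```{python}
# a < 2000 passes for a in 1..1999 and fails for a in 2000..4000.
full = PartRange((0, 4000), (0, 4000), (0, 4000), (0, 4000))
split_coords(full, "a", 2000, "<")
```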
# [Day 20: Pulse Propagation](https://adventofcode.com/2023/day/20)

## Part 1

```{python}
data = load(20)
bus = []


class Module:
    def __init__(self, name, destinations):
        self.name = name
        self.destinations = destinations

    def send(self):
        # put (source, destination, signal) triples on the shared bus
        for destination in self.destinations:
            bus.append((self.name, destination, self.state))

    def instantiate_destinations(self):
        # replace destination names with module objects; unknown names
        # (like the final "rx" sink) become plain do-nothing modules
        destinations = []
        for dest in self.destinations:
            if isinstance(dest, str):
                try:
                    dest = modules[dest]
                except KeyError:
                    dest = Module(dest, [])
            destinations.append(dest)
        self.destinations = destinations

    def recv(self, source, signal):
        pass

    def reset(self):
        pass


class FlipFlop(Module):
    # toggles its state and sends it on every low pulse; ignores high pulses
    def __init__(self, name, destinations):
        super().__init__(name, destinations)
        self.state = 0

    def recv(self, source, signal):
        if signal == 1:
            pass
        else:
            self.state ^= 1
            self.send()

    def reset(self):
        self.state = 0


class Conjunction(Module):
    # remembers the last pulse from each input; sends low only if all are high
    def __init__(self, name, destinations, sources):
        super().__init__(name, destinations)
        self.signals = {x: 0 for x in sources}

    @property
    def state(self):
        return int(not all(self.signals.values()))

    def recv(self, source, signal):
        self.signals[source] = signal
        self.send()

    def reset(self):
        self.signals = {x: 0 for x in self.signals}


class Broadcaster(Module):
    def recv(self, source, signal):
        self.state = signal
        self.send()


def strip_prefix(name):
    return name[1:] if name[0] in "%&" else name


network = {}
modules = {}
for source, destinations in map(lambda x: x.split(" -> "), data):
    network[source] = destinations.split(", ")

for key in network.keys():
    if key[0] == "%":
        modules[key[1:]] = FlipFlop(key[1:], network[key])
    elif key[0] == "&":
        sources = [strip_prefix(k) for k in network if key[1:] in network[k]]
        modules[key[1:]] = Conjunction(key[1:], network[key], sources)
    elif key == "broadcaster":
        modules[key] = Broadcaster(key, network[key])

for key in modules:
    modules[key].instantiate_destinations()
```

```{python}
counts = [0, 0]
for _ in range(1000):
    bus = [("button", modules["broadcaster"], 0)]
    while bus:
        source, destination, signal = bus.pop(0)
        counts[signal] += 1
        destination.recv(source, signal)

counts[0] * counts[1]
```

## Part 2

```{python}
# for module in modules:
#     modules[module].reset()

# trace a single button press to see which modules fire, and with which pulses
bus = [("button", modules["broadcaster"], 0)]
while bus:
    source, destination, signal = bus.pop(0)
    print(source, destination.name, signal, "conjunction" if hasattr(destination, "signals") else "flipflop")
    destination.recv(source, signal)
```

```{python}
import matplotlib.pyplot as plt

# visualise the module network: flip-flops in blue, conjunctions in orange,
# the broadcaster in green and the rx sink in red
G = nx.from_dict_of_lists({strip_prefix(k): v for k, v in network.items()}, create_using=nx.DiGraph)
nodes, prefices = zip(*([(strip_prefix(k), k[0]) for k in network] + [("rx", "d")]))
colormap = {"%": "#1f77b4", "&": "#ff7f0e", "b": "#2ca02c", "d": "#d62728"}
colors = [colormap[prefix] for prefix in prefices]
nx.draw_networkx(G, nodelist=nodes, node_color=colors, arrows=True, pos=nx.bfs_layout(G, "broadcaster"))
plt.tight_layout()
plt.axis("off")
```

```{python}
import matplotlib

# the node colours above are the first entries of matplotlib's tab10 palette
for c in plt.cm.tab10.colors:
    print(matplotlib.colors.to_hex(c))
```

# [Day 21: Step Counter](https://adventofcode.com/2023/day/21)

## Part 1

```{python}
data = np.array([list(line) for line in load(21)])
data = np.pad(data, 1, constant_values="#")
y, x = [x[0] for x in np.where(data == "S")]
data[y, x] = "."


def step(positions):
    result = set()
    for y, x in positions:
        for dy in [-1, 1]:
            if data[y + dy, x] == ".":
                result.add((y + dy, x))
        for dx in [-1, 1]:
            if data[y, x + dx] == ".":
                result.add((y, x + dx))
    return result


positions = [(y, x)]
for _ in range(64):
    positions = step(positions)

len(positions)
```