Advent of Code 2025

aoc · subzcuber

I did Advent of Code over the winter vacation!

advent calendar

Day 1: Secret Entrance

Part A

We have to first count the number of times the dial points to 0

1a.cpp
while (input >> dir >> distance) {
  distance = distance % 100;
  if (dir == 'L') {
    count = (count - distance + 100) % 100;
  } else if (dir == 'R') {
    count = (count + distance + 100) % 100;
  }
  if (count == 0)
    password++;
}
cout << password << endl;

Part B

and in part 2 we count the total number of times it passes through 0

1b.cpp
while (input >> dir >> distance) {
  int q = distance / 100;
  int r = distance % 100;
  old_count = count;
  password += q;
  if (dir == 'L') {
    count = count - r;
    if (count <= 0 && old_count) {
      password++;
    }
  } else if (dir == 'R') {
    count = count + r;
    if (count >= 100 && old_count) {
      password++;
    }
  }
  count = (count + 100) % 100;
}
cout << password << endl;

Day 2: Gift Shop

Find the invalid IDs in a range of IDs

Part A

Invalid IDs are those with some sequence of digits repeated twice, e.g. 6464, 123123, etc.

2a.py
def digits(number: int) -> int:
    return len(str(abs(number)))

while begin <= end:
    nodigits = digits(begin)
    if nodigits % 2 == 0:
        fac = 10 ** (nodigits // 2)
        low = begin % fac
        high = begin // fac
        if low == high:
            count = count + 1
            sum += begin
    begin = begin + 1

Here we split the number into 2 and compare the two halves
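The split-and-compare also reads nicely as a standalone Python sketch working on the digit string instead of arithmetic (the function name is mine):

```python
def is_invalid(n: int) -> bool:
    """An ID is invalid when its digits are one half repeated twice."""
    s = str(n)
    if len(s) % 2:          # an odd number of digits can never split evenly
        return False
    half = len(s) // 2
    return s[:half] == s[half:]
```

So 6464 and 123123 are caught, while 111 still passes here since its length is odd.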

Part B

Now an invalid ID is one that is repeated at least twice, so 111 is also invalid now though it wouldn’t have been earlier

2b.py
def repeat_check(num: int, digit: int) -> bool:
    no_digits = digits(num)
    parts = []
    if no_digits % digit == 0:
        fac = 10 ** (no_digits // digit)
        while num > 0:
            parts.append(num % fac)
            num = num // fac
        for i, part in enumerate(parts[:-1]):
            if part != parts[i+1]:
                return False
        return True
    else:
        return False

with open("./input", "r") as INPUT:
    data = INPUT.read().strip().split(",")
    ranges = []
    sum = 0
    for i in data:
        ranges.append(i.split("-"))
    for thing in ranges:
        begin = int(thing[0])
        end = int(thing[1])

        while begin <= end:
            for i in range(2, digits(begin) + 1):
                if repeat_check(begin, i):
                    sum += begin
                    break
            begin = begin + 1

Now instead of splitting into two, we create a splitter function that checks whether 1 digit is repeated throughout, then 2 digits, and so on (in reverse order actually, but ok)

Day 3: Lobby

Given long strings of “battery banks” each with a single digit “joltage”, find a way to get the required joltage.

Part A

Just had to get the largest two-digit subsequence

811111111111119 gives 89

3a.js
let max_digit = 0;
let max_idx;
for (let j = 0; j < line.length - 1; j++) {
    if (line[j] > max_digit) {
    max_digit = line[j];
    max_idx = j;
    } 
} 
total_joltage += parseInt(max_digit * 10);
max_digit = 0;
for (let j = max_idx + 1; j < line.length; j++) {
    if (line[j] > max_digit) {
    max_digit = line[j];
    } 
}
total_joltage += parseInt(max_digit);

Find the maximum digit before the last one, then find the next maximum digit after that.

Part B

Apparently more power is needed and now you need to select the largest 12 digit subsequence

3b.js
function find_max(line, start, to_leave) {
  let max_digit = 0;
  let max_idx;
  for (let j = start; j < line.length - to_leave; j++) {
    if (line[j] > max_digit) {
      max_digit = line[j];
      max_idx = j;
    } 
  } 
  return [max_digit, max_idx];
}

const data = fs.readFileSync('./input', 'utf8');
const lines = data.split('\n');
let total_joltage = 0;
for (let i = 0; i < lines.length - 1; i++) {
  const line = lines[i];
  let start = 0;
  let j = 12;
  while (j--) {
    let [dig, tmp] = find_max(line, start, j);
    total_joltage += parseInt(dig * (10 ** j));
    start = tmp + 1;
  }
}

We apply the same idea, but as earlier we create a helper function: check all but the last 11 digits, then all but the last 10, and so on
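The same greedy in Python form (a sketch; `max_subsequence` is a name I made up): pick the largest digit that still leaves enough digits to fill the remaining slots, then continue from just after it.

```python
def max_subsequence(line: str, k: int) -> int:
    """Greedily build the largest k-digit subsequence of a digit string."""
    start = 0
    result = 0
    for slots_left in range(k - 1, -1, -1):
        # only search as far as still leaves slots_left digits after the pick
        window = line[start:len(line) - slots_left]
        best = max(window)
        result = result * 10 + int(best)
        start += window.index(best) + 1
    return result
```

With k = 2 this reproduces Part A: 811111111111119 gives 89.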

Day 4: Printing Department

This is where things start getting a bit more interesting. Given a grid of “paper rolls” we need to check if they are accessible by a forklift by counting the number of neighbours. An example grid

..@@.@@@@.
@@@.@.@.@@
@@@@@.@.@@
@.@@@@..@.
@@.@@@@.@@
.@@@@@@@.@
.@.@.@.@@@
@.@@@.@@@@
.@@@@@@@@.
@.@.@@@.@.

Part A

First we tell them how many rolls are accessible. A roll is accessible if it has fewer than 4 neighbours.

4a.cpp
long long result{0};
int column{1};
vector<int> v(grid[0].length(), 0);
// init q to 3 empty rows
vector<vector<int>> q(3, v);

update_queue(q, grid[0], column);
// pop unnecessary top row
q.erase(q.begin());
q.emplace_back(vector<int>(grid[0].length(), 0));

for (size_t i = 1; i < grid.size(); i++) {
    update_queue(q, grid[i], column);
    result += check_row(q[column - 1], grid[i - 1]);
    column++;
    q.emplace_back(vector<int>(grid[0].length(), 0));
}

result += check_row(q[column - 1], grid[grid.size() - 1]);

Since we only need neighbour counts, there is no need to work on the whole grid; I maintain a sliding window q of 3 rows, where each cell in q stores the number of neighbours of that cell.

update_queue() is a function that takes a line of the input grid and, wherever there is a paper roll, increments the neighbour count of each position adjacent to it

/* increment all 8 neighbours of idx i */
void update_queue(vector<vector<int>> &q, const string &row, const int &idx) {
  for (size_t i = 0; i < row.length(); i++) {
    if (row[i] == '@') {
      q[idx - 1][i]++;
      q[idx + 1][i]++;
      if (i > 0) {
        q[idx - 1][i - 1]++;
        q[idx][i - 1]++;
        q[idx + 1][i - 1]++;
      }
      if (i < row.length() - 1) {
        q[idx - 1][i + 1]++;
        q[idx][i + 1]++;
        q[idx + 1][i + 1]++;
      }
    }
  }
}

Then check_row() can simply look at all the paper roll cells and count the ones with value less than 4.
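For comparison, the whole Part A rule fits in a short full-grid Python sketch (no sliding window; names are mine):

```python
def count_accessible(grid: list[str]) -> int:
    """Count '@' cells with fewer than 4 of their 8 neighbours occupied."""
    rows, cols = len(grid), len(grid[0])
    accessible = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != '@':
                continue
            neighbours = sum(
                grid[rr][cc] == '@'
                for rr in range(max(r - 1, 0), min(r + 2, rows))
                for cc in range(max(c - 1, 0), min(c + 2, cols))
                if (rr, cc) != (r, c)
            )
            accessible += neighbours < 4
    return accessible
```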

Part B

Now we need to do the same again, but until there are no more accessible paper rolls: if a paper roll is accessible, we remove it, and in the process free up more paper rolls. This was a slight challenge since I was using a sliding window instead of keeping the entire grid, but it ended up not mattering much

4b.cpp
long long count{0};
long long result{0};
do {
    result = 0;
    int column{1};
    vector<int> v(grid[0].length(), 0);
    vector<vector<int>> q(3, v);

    update_queue(q, grid[0], column);
    // pop unnecessary top row
    q.erase(q.begin());
    q.emplace_back(vector<int>(grid[0].length(), 0));

    for (size_t i = 1; i < grid.size(); i++) {
        update_queue(q, grid[i], column);
        result += check_row(q[column - 1], grid[i - 1]);
        column++;
        q.emplace_back(vector<int>(grid[0].length(), 0));
    }

    result += check_row(q[column - 1], grid[grid.size() - 1]);
    count += result;
} while (result != 0);

We can now store a per-iteration result and a global count, and we iterate while we still have accessible rolls, i.e. result != 0. Also, every time we check a row, we remove the accessible rolls from the grid to free up more rolls in the next iteration

int check_row(const vector<int> &line, string &prev_row) {
  int count{0};
  for (size_t i = 0; i < line.size(); i++) {
    if (prev_row[i] == '@' && line[i] < 4) {
      count++;
      prev_row[i] = '.';
    }
  }
  return count;
}

Day 5: Cafeteria

Given intervals of fresh ids, and a list of ids

Part A

Find the number of IDs that fall in the fresh intervals

5a.py
    for id in ids:
        for r in ranges:
            if id >= r[0] and id <= r[1]:
                ans += 1
                break

And that’s it lol

Part B

This is slightly more challenging, but ends up being easy to implement. Now we are asked to find the total number of fresh ids, which sounds easy, but the ranges are allowed to overlap in the input, and you must account for that

This is where python’s intervaltree module comes in

An interval tree is a data structure that stores intervals, optimised for the question “which of these intervals overlap a given interval?”. This makes merging intervals rather easy here, giving us some very short code

5b.py
import intervaltree

ranges = []
for line in fresh:
    r = list(map(int, line.strip().split('-')))
    if r[0] == r[1]:
        r[1] += 0.1
    ranges.append(r)

ranges.sort()
tree = intervaltree.IntervalTree.from_tuples(ranges)
tree.merge_overlaps(strict=False)
tree = list(tree)
tree.sort()

for interval in tree:
    ans += (interval[1] - interval[0] + 1) // 1

I add the 0.1 because the Python implementation treats intervals as [x, y) whereas we are told they are inclusive of both ends, i.e. [x, y].
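Under the hood, merge_overlaps(strict=False) is doing the classic sort-and-sweep merge. A dependency-free sketch of that idea (helper name is mine, on inclusive intervals):

```python
def merge_intervals(ranges):
    """Merge overlapping inclusive [lo, hi] intervals after sorting by lo."""
    merged = []
    for lo, hi in sorted(ranges):
        if merged and lo <= merged[-1][1]:   # overlaps the previous interval
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    return merged
```

The total number of fresh ids is then sum(hi - lo + 1 for lo, hi in merged).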


Day 6: Trash Compactor

Time to solve math homework

Part A

123
 45
  6
*  

translates to 123 * 45 * 6 = 33210

Given a long list we have to calculate the sum of all answers

enum OPERATION { ADD, MUL };

for (char c : input.back()) {
    switch (c) {
    case '+':
        op.push_back(ADD);
        break;
    case '*':
        op.push_back(MUL);
        break;
    default:
        break;
    }
}

vector<unsigned long long int> ans(op.size());
for (size_t i = 0; i < ans.size(); i++)
    ans[i] = static_cast<int>(op[i]);

for (auto itr = input.begin(); itr != input.end() - 1; itr++) {
    line = *itr;
    int x;
    istringstream iss{line};
    int count = 0;
    while (iss >> x) {
        if (op[count] == ADD) {
            ans[count++] += x;
        } else if (op[count] == MUL) {
            ans[count++] *= x;
        }
    }
}

I first read the operations line (the last line) and based on those init a results array. Addition problems init the answer to 0, multiplication ones to 1.

Then as we read the numbers line by line we can keep performing the required operation and storing the results! That was fairly straightforward
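In Python the whole accumulate-as-you-read idea looks roughly like this (a sketch on made-up input; the function name is mine):

```python
def sum_of_answers(lines: list[str]) -> int:
    """Fold each whitespace-separated column as its numbers stream in."""
    ops = lines[-1].split()                      # last line holds the operators
    acc = [0 if op == '+' else 1 for op in ops]  # identity: 0 for +, 1 for *
    for line in lines[:-1]:
        for i, tok in enumerate(line.split()):
            acc[i] = acc[i] + int(tok) if ops[i] == '+' else acc[i] * int(tok)
    return sum(acc)
```

On the example above, sum_of_answers(["123", " 45", "  6", "  *"]) gives 33210.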

Part B

Time to make me regret my life decisions. AoC involves so much weird parsing T-T

123 
 45 
  6
*

is now interpreted as 356 * 24 * 1 = 8544

6b.cpp
int count{0};
for (size_t i = 0; i < input[0].size(); i++) {
    string number{""};
    for (size_t j = 0; j < input.size() - 1; j++) {
        if (input[j][i] >= '0' && input[j][i] <= '9') {
            number += input[j][i];
        }
    }
    if (number.length()) {
        switch (op[count]) {
        case ADD:
            ans[count] += atoi(number.c_str());
            break;
        case MUL:
            ans[count] *= atoi(number.c_str());
            break;
        }
    } else {
        count++;
    }
}

Now I have to store the input as a grid of characters, parse the grid column by column, and proceed. If no number is found in a column, I know the current problem has ended and I move on to the next one
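The column walk itself can be sketched in Python (my recasting of the C++ above; names are mine):

```python
def parse_problems(rows: list[str]) -> list[list[int]]:
    """Read numbers column by column; an all-blank column ends a problem."""
    width = max(len(r) for r in rows)
    padded = [r.ljust(width) for r in rows]
    problems, current = [], []
    for i in range(width):
        digits = ''.join(r[i] for r in padded if r[i].isdigit())
        if digits:
            current.append(int(digits))
        elif current:          # blank column: flush the finished problem
            problems.append(current)
            current = []
    if current:
        problems.append(current)
    return problems
```

On the example, parse_problems(["123", " 45", "  6"]) yields [[1, 24, 356]], and 1 * 24 * 356 = 8544 as stated.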

Day 7: Laboratories

.......S.......
...............
.......^.......
...............
......^.^......
...............
.....^.^.^.....
...............
....^.^...^....
...............
...^.^...^.^...
...............
..^...^.....^..
...............
.^.^.^.^.^...^.
...............

Part A

Count the number of times the beam splits

7a.cpp
for (size_t i = 0; i < manifold.size(); i++) {       
  for (size_t j = 0; j < active_beams.size(); j++) { 
    switch (manifold[i][j]) {                        
    case 'S':                                        
      active_beams[j] = true;                        
      break;                                         
    case '^':                                        
      if (active_beams[j]) {                         
        split_count++;                               
        active_beams[j] = false;                     
        if (j != 0) {                                
          active_beams[j - 1] = true;                
        }                                            
        if (j != active_beams.size() - 1) {          
          active_beams[j + 1] = true;                
        }                                            
      }                                              
      break;                                         
    }                                                
  }                                                  
}                                                    

Easy enough to simulate: basically every time an active beam encounters a ^, you increment the split count

Part B

Now you need to calculate every possible path a single beam could go through, given that at every ^ it goes either left or right.

Due to the scale of the problem we use DP

  S       
00100     
  ^       
01010     
 ^ ^      
10201     

Consider the above example,

  • We start at S, and the value of 1 at S represents the paths that can pass through it.
  • Once it is split, there is no path on that exact cell, but one is created on both sides of it
  • The next time both those paths split, if the beam passes through the cell between the two ^, it could have come from either the left or the right, so two paths pass through that cell

We can represent this with a dp array where

if grid[j] == '^':
    dp[j-1] += dp[j]
    dp[j+1] += dp[j]
    dp[j] = 0

And finally the sum of all cells in dp[] gives us the total number of paths possible
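Here is the recurrence run on the small example above (a runnable sketch; it assumes splitters are never horizontally adjacent, as in the examples, so updating dp in place within a row is safe):

```python
manifold = ["..S..",
            ".....",
            "..^..",
            ".....",
            ".^.^."]

dp = [0] * len(manifold[0])
dp[manifold[0].index('S')] = 1
for row in manifold[1:]:
    for j, c in enumerate(row):
        if c == '^':            # a splitter moves this cell's paths sideways
            dp[j - 1] += dp[j]
            dp[j + 1] += dp[j]
            dp[j] = 0

print(dp, sum(dp))  # [1, 0, 2, 0, 1] 4 -- the 10201 row, 4 paths total
```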

7b.cpp
unsigned long long quantum_traverse(const vector<string> &manifold,
                                    const size_t &start) {
  vector<unsigned long long> dp(manifold[0].size(), 0);
  dp[start] = 1;
  for (size_t i = 1; i < manifold.size(); i++) {
    for (size_t j = 0; j < dp.size(); j++) {
      if (manifold[i][j] == '^') {
        dp[j - 1] += dp[j];
        dp[j + 1] += dp[j];
        dp[j] = 0;
      }
    }
  }

  unsigned long long sum{0};
  for (const auto &e : dp) {
    sum += e;
  }
  return sum;
}

Day 8: Playground

This was one of the wildest ones. I was using such random algorithms from cp-algorithms. I even added a custom hash function for a 3d point 😭

You’re given a long list of (x, y, z) points

It’s honestly not too bad in terms of execution, but the code is horrible and I truly do not understand what I wrote; this will be a journey we undergo together.

You must connect the 1000 closest points, and see what circuits they make. A string of connected points forms a circuit

Let’s first handle the closest points part

Nearest Points

Everything mentioned here can be found in the helpful cp-algorithms page

We implement a linear time randomised algorithm to populate a priority queue with the 1000 closest points

essentially we break up the 3D grid into smaller cubes, and then look in those cubes for nearest points.

First we select the side length d of each block, then iterate over points in the same block, and finally over those in adjacent blocks

Here’s my C++ implementation. Note that sometimes my hash function didn’t manage to uniquely hash every point, so in my constructor I force it to repeat until I’m sure my hash function did its work and I have enough pairs in my priority queue

closestpoints.cpp
class ClosestPoints {
private:
  double min_dist{MAXFLOAT};
  vector<Point> pts{};
  pair<int, int> best_pair;

  struct CustomHashPoint {
    size_t operator()(const Point &p) const {
      static const unsigned long long int C =
          std::chrono::steady_clock::now().time_since_epoch().count();
      unsigned long long h{hash<ll>{}(p.x)};
      h ^= hash<ll>{}(p.y);
      h ^= hash<ll>{}(p.z);
      return C ^ h;
    }
  };

public:
  priority_queue<Pair, vector<Pair>, PairCmp> pq{};

  ClosestPoints(const vector<Point> &boxes) : pts{boxes}, best_pair({0, 0}) {
    int n = static_cast<int>(pts.size());
    // redo if less than nC2 possible pairs
    while (static_cast<int>(pq.size()) <= (n * (n - 1) / 2)) {
      closest();
    }
  };

  void closest() {
    int n = static_cast<int>(pts.size());
    assert(n >= 2);

    unordered_map<Point, int, CustomHashPoint> previous{};

    // check for duplicates
    for (int i = 0; i < static_cast<int>(pts.size()); i++) {
      auto it = previous.find(pts[i]);
      if (it != previous.end()) {
        // if found that's the best pair
        best_pair = {it->second, i};
      }
      previous[pts[i]] = i;
    }

    unordered_map<Point, vector<int>, CustomHashPoint> grid;
    grid.reserve(n);

    mt19937 rd(chrono::system_clock::now().time_since_epoch().count());
    uniform_int_distribution<int> dis(0, n - 1);

    // sample d for the cube
    double d2 = pts[0].distance_from(pts[1]);
    pair<int, int> closest = {0, 1};

    auto candidate_closest = [&](int i, int j) -> void {
      double ab2 = pts[i].distance_from(pts[j]);
      // NOTE: couldn't figure out how to handle duplicates
      pq.push({i, j, ab2});
      if (ab2 < d2) {
        // update d
        d2 = ab2;
        closest = {i, j};
      }
    };

    // check n random pairs
    for (int i = 0; i < n; i++) {
      int j = dis(rd);
      int k = dis(rd);
      while (j == k) {
        k = dis(rd);
      }
      candidate_closest(j, k);
    }

    // this is our final d for the cube
    long long d = static_cast<long long>(d2 + 1);

    for (int i = 0; i < n; i++) {
      // add point i to grid
      grid[Point(pts[i].x / d, pts[i].y / d, pts[i].z / d)].push_back(i);
    }

    // iterate over cubes in the grid
    for (const auto &it : grid) {
      int k = int(it.second.size());
      // same cube check
      for (int i = 0; i < k; i++) {
        for (int j = i + 1; j < k; j++) {
          candidate_closest(it.second[i], it.second[j]);
        }
      }
    }

    // adjacent cube check
    for (const auto &it : grid) {
      auto coord = it.first;
      for (int dx = 0; dx <= 1; dx++) {
        for (int dy = -1; dy <= 1; dy++) {
          for (int dz = -1; dz <= 1; dz++) {
            if (dx == 0 && dy == 0 && dz == 0)
              continue;
            Point neighbour(coord.x + dx, coord.y + dy, coord.z + dz);

            for (int i : it.second) {
              if (!grid.count(neighbour))
                continue;
              for (int j : grid.at(neighbour)) {
                candidate_closest(i, j);
              }
            }
          }
        }
      }
    }

    best_pair = closest;
  }
};

Disjoint Circuits

Once we have our required pairs, we need to start making connections and count the number of disjoint circuits formed. We can use the very cute Union-Find data structure for this! (aka Disjoint Set)

uf

unionfind.cpp
class UnionFind {
private:
  vector<int> parent;
  vector<int> size;

public:
  UnionFind(const size_t &sz) : parent(sz), size(sz, 1) {
    for (size_t i = 0; i < sz; i++)
      parent[i] = static_cast<int>(i);
  }

  int find_set(int v) {
    if (v == parent[v])
      return v;
    return parent[v] = find_set(parent[v]);
  }

  void union_sets(int a, int b) {
    a = find_set(a);
    b = find_set(b);
    if (a != b) {
      if (size[a] < size[b]) {
        swap(a, b);
      }
      parent[b] = a;
      size[a] += size[b];
    }
  }

  int ans() const {
    vector<int> tmp(size.begin(), size.end());
    sort(tmp.begin(), tmp.end(), std::greater<int>());
    return tmp[0] * tmp[1] * tmp[2];
  }
};

Here’s my simple implementation with union by size; I add an ans() function that returns the product of the sizes of the 3 largest disjoint circuits

With these two set up we can finally solve our Part A

Part A

8a.cpp
ClosestPoints cp(boxes);
UnionFind uf(boxes.size());

pair<int, int> prev = {-1, -1};
for (int i = 0; i < 1000 && !cp.pq.empty(); i++) {
  Pair p = cp.pq.top();
  cp.pq.pop();

  if (p.pt1 > p.pt2) {
    swap(p.pt1, p.pt2);
  }

  // handle repeated pairs
  pair<int, int> curr = {p.pt1, p.pt2};
  if (curr == prev) {
    i--;
    continue;
  }

  uf.union_sets(p.pt1, p.pt2);

  prev = curr;
}

Simply pop a pair from the priority_queue and put it into union find.

You may notice I have to handle duplicate pairs. Remember in my closest points implementation, every time I compare two points I add them to the priority queue? Well, we initially sample random pairs to approximate a minimum distance d before we finally iterate over every candidate pair in the resulting grid. This led to duplicate pairs being formed in the priority queue that I have to handle here. Welp, sorry about that.

Part B

Now we have to continue until we form a single circuit. It’s a simple extension to Part A: we just keep going until Union-Find reports a single set, which is a simple enough check

int count_sets() const {
  int sum{0};
  for (size_t i = 0; i < parent.size(); i++) {
    if (parent[i] == static_cast<int>(i))
      sum++;
  }
  return sum;
}
8b.cpp
while (!cp.pq.empty()) {
  Pair p = cp.pq.top();
  cp.pq.pop();

  if (p.pt1 > p.pt2) {
    swap(p.pt1, p.pt2);
  }

  uf.union_sets(p.pt1, p.pt2);

  if (uf.count_sets() == 1) {
    cout << boxes[p.pt1].x * boxes[p.pt2].x << "\n";
    return EXIT_SUCCESS;
  }
}

I no longer have to handle duplicates since I don’t care about how many times I union the same pair of points! Union-Find handles that for me, so cute!

Wow, that was some journey. Apparently I could have just sorted the distances between every possible pair; that wouldn’t have been too slow
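For the record, that simpler approach is only a few lines (a sketch; `closest_pairs` is a name I made up — you would feed the resulting pairs into Union-Find exactly as above):

```python
from itertools import combinations

def closest_pairs(points, k):
    """The k closest index pairs, found by sorting all n*(n-1)/2 pairs."""
    def dist2(a, b):
        # squared euclidean distance, no sqrt needed for ordering
        return sum((u - v) ** 2 for u, v in zip(a, b))
    pairs = sorted(combinations(range(len(points)), 2),
                   key=lambda ij: dist2(points[ij[0]], points[ij[1]]))
    return pairs[:k]
```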

Day 9: Movie Theatre

If you thought Day 8 was bad, let me please please please make you more disappointed with me. At least Day 8 was somewhat performant and well intentioned. Look at my runtime for Part B 😭

time python day9_b.py
1351617690

________________________________________________________
Executed in   25.15 secs    fish           external
   usr time   25.04 secs    0.14 millis   25.04 secs
   sys time    0.01 secs    1.01 millis    0.01 secs

I truly apologise :(

Given a grid like below

..............
.......#...#..
..............
..#....#......
..............
..#......#....
..............
.........#.#..
..............

We have to work with the rectangles formed by the points on the grid

Part A

Just says to find the largest possible rectangle formed by any two points; this is a simple brute force

9a.py
max_area = 0
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        max_area = max(max_area, area(points[i], points[j]))

Part B

Now we come to the interesting problem. The list of points given to us, in the order given, always forms a polygon

97498,50350
97498,51566
97867,51566

input looks something like this, notice the alternating x and y coordinates.

The challenge is now to find the largest rectangle

  • between two points
  • that fits inside the given polygon

So we do our earlier thing, but now check if it’s inside the polygon??? When I was going through the interval trees for day 5 I saw somewhere that you could store polygons as sets of intervals: an x-intervaltree for the horizontal edges, and a y-intervaltree for the vertical edges

So that’s what I did. Now to check if a rectangle formed by our two points lies inside the polygon, we need to ensure two things

  1. No edge of the rectangle intersects any edge of the polygon (i.e. it’s wholly inside or outside)
  2. The center of the rectangle is inside the polygon

The first is where our interval tree shines (this is sarcasm, literally all the execution time goes in that check)

def intersecting_edges(a, b):
    x1, y1 = a
    x2, y2 = b
    x_min, x_max = min(x1, x2), max(x1, x2)
    y_min, y_max = min(y1, y2), max(y1, y2)

    for iv in xtree.overlap(x_min, x_max):
        if y_min < iv.data < y_max:
            return True

    for iv in ytree.overlap(y_min, y_max):
        if x_min < iv.data < x_max:
            return True

    return False

To check if the rectangle created by two points a and b intersects any other edge we use the intervaltree.overlap() method which returns the edges that overlap the given interval, and we check the remaining coordinate of that edge if it lies inside our rectangle. If it does, clearly it intersects it.

def ray_casting(a, b):
    mid = [(a[0] + b[0]) // 2, (a[1] + b[1]) // 2]
    h_edges = xtree.at(mid[0])

    above = 0
    for e in h_edges:
        if e.data > mid[1]:
            above += 1
    
    left = 0
    v_edges = ytree.at(mid[1])
    for e in v_edges:
        if e.data < mid[0]:
            left += 1
    
    return above % 2 != 0 or left % 2 != 0

Once we get our rectangle, we can throw a vertical edge through its centroid. If that edge passes through an odd number of polygon edges, it is inside it, otherwise it must be outside the polygon. Having a rectangle that’s just a line (same x/y coordinate) adds some difficulty which is why I check against both horizontal edges and vertical edges (I should probably just ignore the 1D rectangles though)
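The same even-odd test without the interval trees, for an axis-aligned polygon (a standalone sketch; names are mine): cast a ray straight up from the point and count crossings of horizontal edges.

```python
def inside(poly, x, y):
    """Even-odd rule: inside iff an upward ray crosses an odd number of horizontal edges."""
    crossings = 0
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if y1 == y2 and y1 > y:                 # horizontal edge strictly above the point
            if min(x1, x2) <= x < max(x1, x2):  # half-open so shared vertices count once
                crossings += 1
    return crossings % 2 == 1
```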

9b.py
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        if not intersecting_edges(points[i], points[j]) and ray_casting(points[i], points[j]):
            max_area = max(max_area, area(points[i], points[j]))

Can you see the slight optimisation? My most expensive function is definitely the .overlap() method in the intersecting_edges check, so if I only compute that for candidate rectangles with area greater than the current max, I can save some time.

a = area(points[i], points[j]) 
if a > max_area:
    if not intersecting_edges(points[i], points[j]) and ray_casting(points[i], points[j]):
        max_area = max(max_area, a)

This saves me about 6-7 seconds (😉)

Here’s what the polygon looks like btw

polly

Eventually I’ll do something better, jverkamp day9 looks interesting

Day 10: Factory

This was a really interesting one, and it originally stumped me for a long, long time. Eventually I gave up and took a hint from @jverkamp’s writeup

The premise is as follows

[.##.] (3) (1,3) (2) (2,3) (0,2) (0,1) {3,5,4,7}

Your input line has 3 parts, the expected light diagram, the buttons, and the required joltages

Part A

Each button toggles the lights specified, and we have to reach the final light diagram from all LEDs off in minimal button presses.

My initial approaches looked like trying to find a linear combination of buttons whose values XORed to the required configuration, but I couldn’t make it minimal and gave up.

A long time later, I took a hint and saw the word “BFS”

We can do a breadth first search over the possible light configurations, applying the button presses at each iteration, and the first configuration to match is guaranteed to have minimal button presses!

10a.py
q = deque()
q.append([0, [False] * len(light_diagram)])
visited = set()
visited.add(tuple([False] * len(light_diagram)))

minimum_presses = len(buttons) + 1
while len(q) != 0:
    presses, lights = q.popleft()
    if presses > len(buttons):
        break
    for button in buttons:
        candidate_presses = presses
        candidate_lights = lights.copy()
        for x in button:
            candidate_lights[x] = not candidate_lights[x]
        candidate_presses += 1
        if candidate_lights == light_diagram:
            if candidate_presses < minimum_presses:
                minimum_presses = candidate_presses
                break
        state_tuple = tuple(candidate_lights)
        if state_tuple not in visited:
            visited.add(state_tuple)
            q.append([candidate_presses, candidate_lights])

Part B

Part B adds a lot more constraints, and now joltage plays a part too. Each time a button is pressed, the joltage for those LEDs goes up, and now we must find the minimal button presses that also solve our joltage requirements

With all these constraints, and looking at the hint again, I decided to use z3

We add z3 variables for the number of times each button is pressed, and the buttons themselves. The XOR constraints ensure the final light configuration is reached, and the number of presses ensure that joltage requirements are reached

10b.py
butts = [ z3.Bool(f"b_{j}") for j in range(len(buttons)) ]
press = [ z3.Int(f"p_{j}") for j in range(len(buttons))]

s = z3.Optimize()

for i, light in enumerate(light_diagram):
    req_buttons = []
    for j, button in enumerate(buttons):
        if i in button:
            req_buttons.append(j)

    e = butts[req_buttons[0]]
    for r in req_buttons[1:]:
        e ^= butts[r]                       # this is basically what i was trying for part a earlier

    e = (e == light)
    e = z3.simplify(e)
    s.add(e)

for i in range(len(light_diagram)):
    e = 0
    for idx, button in enumerate(buttons):
        s.add(press[idx] >= 0)
        if i in button:
            e += press[idx]
    e = (e == joltage_requirements[i])
    e = z3.simplify(e)
    s.add(e)

total_press = z3.Sum(press)
s.minimize(total_press)

if s.check() == z3.sat:
    m = s.model()
    tot = m.evaluate(total_press)
    answer += tot.as_long()
else:
    print(f"[-] ERROR no solution found")

Day 11: Reactor

This was a pretty straightforward day, simple graph algorithms

aaa: you hhh
you: bbb ccc
bbb: ddd eee
ccc: ddd eee fff
ddd: ggg
eee: out
fff: out
ggg: out
hhh: ccc fff iii
iii: out

Given adjacency lists like the above, we do stuff

Part A

Count the number of paths from you -> out. Apply straightforward BFS starting from you and increment by 1 every time you reach out.

11a.cpp
long long num_paths{0};
deque<string> bfs_q{};
bfs_q.push_back("you");

while (!bfs_q.empty()) {
  string node{bfs_q.front()};
  bfs_q.pop_front();

  if (node == "out") {
    num_paths++;
    continue;
  }

  visited[node] = true;
  for (const string &n : graph[node]) {
    if (!visited[n]) {
      bfs_q.push_back(n);
    }
  }
}

Part B

Now we need to ensure every path to out goes through the nodes dac and fft.

What I did here is attach a state tuple to the BFS queue that tracks whether those nodes were visited or not, and I only increment my counter if they were both visited. This time however we start from the node svr, which has a LOT more paths to out, and we can no longer perform standard BFS. We need memoisation.

So we create a memo and make our implementation recursive to accommodate it

11b.cpp
long long count_paths(node_t state,
                      const unordered_map<string, vector<string>> &graph,
                      map<node_t, long long> &memo) {
  string node;
  bool v_dac;
  bool v_fft;
  tie(node, v_dac, v_fft) = state;

  v_dac = node == "dac" ? true : v_dac;
  v_fft = node == "fft" ? true : v_fft;

  if (node == "out") {
    if (v_dac && v_fft) {
      return 1LL;
    } else {
      return 0LL;
    }
  }

  node_t check = make_tuple(node, v_dac, v_fft);

  if (memo.count(check)) {
    return memo[check];
  }

  long long total{0};
  for (const string &n : graph.at(node)) {
    total += count_paths(make_tuple(n, v_dac, v_fft), graph, memo);
  }

  memo[state] = total;
  return total;
}

And inside of main

map<tuple<string, bool, bool>, long long> memo{};
long long num_paths{count_paths(make_tuple("svr", false, false), graph, memo)};

Day 12: Christmas Tree Farm

There was only one part to this, yay. Also my solution was such a fluke I didn’t even try to accurately answer the problem

Basically the problem is we’re given a lot of shapes, and have to fit the required shapes into regions of size A x B

Shapes look like

4:
###
#..
###

5:
###
.#.
###

etc, and a region would look like

4x4: 0 0 0 0 2 0

which says we must fit two instances of shape 4 into a region of size 4x4.

Part A

We have to count the number of regions that can fit the shapes they specify. The absolute most rudimentary way to check this would be a simple area check: if the total area of the shapes is <= the area of the region, they might fit. So let’s count that first

for (const region_t &region : regions) {
  long long area = region.first.first * region.first.second;
  long long required_area{0};
  for (size_t i = 0; i < region.second.size(); i++) {
    if (region.second[i]) {
      required_area += region.second[i] * shape_area(shapes[i]);
    }
  }
  if (required_area <= area) {
    answer++;
  }
}

And thankfully for us this is the required answer. Otherwise we were screwed: I can’t think of anything, nor did I find any other solution, besides just backtracking and checking every combination of shapes on that grid.


That was an experience, and a rather fun one.
