u/leetgoat_dot_io

LeetCode Weekly Contest 496 - How I solved them

Overall a 6/10 difficulty contest I'd say.

EDIT: I wrote the above because a lot of people ask how hard the contests are; this is just my opinion, is all.

Q1: Kind of an implementation battle. I recommend making helper functions like mirror(char) to keep it clean. A good trick to avoid duplicates is to canonicalize each pair by always putting the smaller letter first, and store the tuples we have already considered in a set. So (a, z) goes into the set, and when we later get to (z, a) it canonicalizes to (a, z) and gets de-duped.
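The dedup trick can be sketched like this (hypothetical helper name, not the actual contest code):

```python
def canonical(pair):
    """Order a pair so (a, z) and (z, a) hash to the same key."""
    a, b = pair
    return (a, b) if a <= b else (b, a)

seen = set()
for pair in [("a", "z"), ("z", "a"), ("m", "q")]:
    key = canonical(pair)
    if key in seen:
        continue  # already considered this unordered pair
    seen.add(key)
```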

Q2: Since N is large we cannot enumerate up to it directly. Instead, observe that a single factor can be at most n^(1/3). Now just enumerate pairs of these options, which gives us n^(2/3) complexity, and track how many numbers get at least two occurrences.

Q3: If N is odd it is easy: make indices 1, 3, 5, ... the peaks.

If it is even we run into some tricky cases...

#    1 | (2) 3 (4) 5 (6) 7 | 8
#    1 | 2 (3) 4 (5) 6 (7) | 8
#    1 | (2) 3 (4) 5 6 (7) | 8
#    1 | (2) 3 4 (5) 6 (7) | 8

Numbers inside the () are peaks. We can basically put a single gap of 2 in multiple places, but note we cannot put it at i=1 or i=n-2.

To find the best answer, I used prefix and suffix arrays to track the scores of the left and right regions, and enumerated the possible locations for the gap. Lots of edge cases.

Q4: dp(index, peaksLeft, didTakeFirst) has n x k x 2 states. A knapsack DP.

Note that this is too much in Python, so you can either attempt a cursed bottom-up solution that uses only O(k) memory (annoying, since we have 2 dp arrays and need to preserve the last 2 layers for the recurrence), or you can just use C++.

We could also observe that k is at most n/2 but even then it's sus in python.
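The exact Q4 recurrence depends on the problem statement, but a toy DP with the same n x k x 2 state shape (max sum of at most k pairwise non-adjacent elements) looks like this — purely illustrative, not the contest solution:

```python
from functools import lru_cache

def max_sum_k_nonadjacent(nums, k):
    """Toy DP with state (index, picksLeft, tookPrevious):
    max sum of at most k pairwise non-adjacent elements."""
    @lru_cache(maxsize=None)
    def dp(i, left, took_prev):
        if i == len(nums) or left == 0:
            return 0
        best = dp(i + 1, left, False)  # skip nums[i]
        if not took_prev:              # take nums[i] only if the previous wasn't taken
            best = max(best, nums[i] + dp(i + 1, left - 1, True))
        return best
    return dp(0, k, False)
```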

u/leetgoat_dot_io — 14 hours ago

Notes on Small to Large Merging [Advanced]

Here are some notes on small-to-large merging. Written by me (a human), not AI. Please note this is a more advanced topic and not for interviews, but it does appear on LeetCode.

1: Naive version

Say we have a DFS function that we call on every node in the tree, and at each node we consider every pair of nodes drawn from two different child subtrees. It looks like O(n^3) because we can do up to n^2 work at each of n nodes, but it is actually O(n^2), because every pair of nodes gets considered exactly once: at its LCA.

def dfs(node):
  for i, child1 in enumerate(children[node]):
    for child2 in children[node][i + 1:]:
      for u in subtree[child1]:
        for v in subtree[child2]:
          pass  # do something with the pair (u, v)

2: Speedup with small to large merging

Now instead of looping over pairs, we will just loop over children. Each child returns some data of size c, where c is the size of that child's subtree.

def dfs(node):
  accumulator = []
  for child in children[node]:
    childData = dfs(child)  # its size is the size of that child's subtree
    # crucial small-to-large swap for O(n log n): always merge the smaller side in
    if len(accumulator) < len(childData):
      accumulator, childData = childData, accumulator
    for val in childData:
      pass  # do work (answer queries against the accumulator)
    for val in childData:
      accumulator.append(val)  # safely merge the light side into the accumulator
  return accumulator

Proof of O(n log n) time complexity: Consider any element e. Every time e sits in the smaller bucket, it gets merged into a bucket at least as large, so the size of e's container at least doubles. The container size can double at most log N times, so e moves at most log N times, giving O(n log n) moves in total.
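As a concrete, runnable instance of the pattern (my own toy task, not from any particular problem): counting distinct values in every subtree with the swap trick:

```python
from collections import defaultdict

def distinct_per_subtree(n, edges, values, root=0):
    """For each node, count the distinct values in its subtree,
    merging sets small-to-large."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)
    answer = [0] * n

    def dfs(node):
        accumulator = {values[node]}
        for child in children[node]:
            child_data = dfs(child)
            # small-to-large: always merge the smaller set into the larger
            if len(accumulator) < len(child_data):
                accumulator, child_data = child_data, accumulator
            accumulator |= child_data
        answer[node] = len(accumulator)
        return accumulator

    dfs(root)
    return answer
```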

3: Simpler version of small to large merging that is O(n log n)

I have found that instead of swapping accumulator and childData, we can just pick the heaviest child's container as the root container and merge everything else into it. If we initialize the accumulator with the largest child, every other child bucket is at most as large, so whenever elements move, their container size at least doubles, and the previous argument still holds.

def dfs(node):
  # one bucket per child; each bucket's size is that child subtree's size
  childrenData = [dfs(child) for child in children[node]]
  childrenData.sort(key=len, reverse=True)
  heavyChild = childrenData[0] if childrenData else []
  for i in range(1, len(childrenData)):
    for val in childrenData[i]:
      heavyChild.append(val)  # merge this light child into the root container
  return heavyChild

4: Traps

It is not safe to execute O(len(heavyChild)) work at each node, like this:

def dfs(node):
  childrenData = [dfs(child) for child in children[node]]
  childrenData.sort(key=len, reverse=True)
  heavyChild = childrenData[0]
  newDataStructure = fn(heavyChild)  # O(len(heavyChild)) work at every node: NOT SAFE

Imagine a stick (path) tree: we would do 1+2+3+...+N = O(n^2) work.
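A toy simulation makes the gap visible on a path tree: copying the heavy side at every node costs 1+2+...+(n-1) element moves, while small-to-large copies only the light side (the setup and names here are my own, purely illustrative):

```python
def count_moves(n, small_to_large):
    """Path tree 0-1-...-(n-1) rooted at 0, one element per node.
    Count element copies when merging buckets bottom-up."""
    moves = 0
    merged = []                     # bucket coming up from below
    for depth in range(n - 1, -1, -1):
        own = [depth]               # this node's own bucket
        child_data = merged
        if small_to_large and len(own) < len(child_data):
            own, child_data = child_data, own
        moves += len(child_data)    # we always copy the child_data side
        own.extend(child_data)
        merged = own
    return moves
```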

Example bad submission (that somehow still passed): https://leetcode.com/problems/maximum-xor-of-two-non-overlapping-subtrees/submissions/1967670898/

The fix is to re-use structures.

def dfs(node):
  childrenData = [dfs(child) for child in children[node]]
  childrenData.sort(key=len, reverse=True)
  heavyChildBinaryTrie, heavyChildrenValues = childrenData[0]  # re-use the heaviest child's structures
  for i in range(1, len(childrenData)):
    lightChildBinaryTrie, lightChildValues = childrenData[i]
    # loop over each light child's values against heavyChildBinaryTrie; lightChildBinaryTrie gets thrown away
    for v in lightChildValues:
      pass  # update the result here (query heavyChildBinaryTrie)
    for v in lightChildValues:
      pass  # update the accumulator (separate pass, so queries above don't see same-node values)
    for v in lightChildValues:
      heavyChildrenValues.append(v)  # extend the element list (can also fold into the previous loop)
  return heavyChildBinaryTrie, heavyChildrenValues

We also cannot do something like allValues = [val for childData in childrenData for val in childData], because this loops over the heavy values too. Golden rule: we cannot do heavy work at a node.

Instead, just append the list of light values to the heavy values at the end, like the above code.

5: Sorting is safe

Note that we can safely sort children inside a node, and it doesn't break the O(n log n) invariant:

def dfs(node):
  childrenData = [dfs(child) for child in children[node]]
  childrenData.sort(key=len, reverse=True)  # safe: each child list is sorted exactly once, so all sorts total O(n log n)

If anything, sorting two lists of size n/2 is faster than a single sort on n, so this is fine performance-wise. But it isn't necessary: we could locate the heavy child in a single O(n) pass instead.
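A one-pass version might look like this (hypothetical helper, assuming the buckets are plain lists):

```python
def heaviest_first(childrenData):
    """Move the largest bucket to the front in one O(n) pass instead of sorting."""
    if not childrenData:
        return childrenData
    i = max(range(len(childrenData)), key=lambda j: len(childrenData[j]))
    childrenData[0], childrenData[i] = childrenData[i], childrenData[0]
    return childrenData
```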

6: Separating the accumulators from the data we send up

Note that the accumulators can use different data from the actual values we send up. For instance, if we want the max XOR of any two non-overlapping subtree sums, we can send up subtree sums as the values, and bit tries as the accumulators.

7: Piecing it together

Here's a sample solution combining all of the above concepts. It is O(n * B * log n) complexity: https://leetcode.com/problems/maximum-xor-of-two-non-overlapping-subtrees/submissions/1967703802/

u/leetgoat_dot_io — 2 days ago

Road to solving EVERY LeetCode problem (3,153 solved) - Week 8 progress update!

Week 8 progress update - reporting live from the infusion clinic once again! I take this medicine every 4 weeks for an autoimmune disease. The infusions also take hours, which is a good time to solve problems (and write posts hehe).

2 months ago I started my challenge to finally finish every LeetCode problem this year 😭

I solved 33 questions this week:
- 3 easy
- 19 medium
- 11 hard

My favorite question was "3700. Number of ZigZag Arrays II"; I used matrix exponentiation to solve it in O(8 * m^3 * log n) time.
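The recurrence matrix for that problem isn't shown here, but the generic fast-exponentiation kernel behind any such solution looks like this (a sketch; the Fibonacci matrix below is just a demonstration input, not the ZigZag matrix):

```python
def mat_mult(A, B, mod):
    """Multiply matrices A (n x m) and B (m x p) under a modulus."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            if A[i][k]:
                aik = A[i][k]
                for j in range(p):
                    C[i][j] = (C[i][j] + aik * B[k][j]) % mod
    return C

def mat_pow(M, e, mod):
    """Raise square matrix M to the e-th power by binary exponentiation."""
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mult(R, M, mod)
        M = mat_mult(M, M, mod)
        e >>= 1
    return R
```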

My goal this week is to solve 20 problems. Going skiing! So might not have time...

Week 0: 2895/3832 - 937 remain Reddit · LinkedIn

Week 1: 2958/3837 - 879 remain (solved 63) Reddit · LinkedIn

Week 2: 2992/3846 - 854 remain (solved 34) Reddit · LinkedIn

Week 3: 3020/3851 - 831 remain (solved 28) Reddit · LinkedIn

Week 4: 3049/3860 - 811 remain (solved 29) Reddit · LinkedIn

Week 5: 3068/3865 - 797 remain (solved 19) LinkedIn

Week 6: 3099/3874 - 775 remain (solved 31) LinkedIn

Week 7: 3120/3879 - 759 remain (solved 21) Reddit · LinkedIn

Week 8: 3153/3888 - 735 remain (solved 33) LinkedIn

Profile: https://leetcode.com/u/leetgoat_dot_io/

LET'S GET THIS!!!
