Why does 32-bit bitmasking work in Python for LeetCode "Single Number II" when Python ints are arbitrary precision?

I'm trying to understand why the following solution for LeetCode's Single Number II works in Python:

from typing import List

class Solution:
    def singleNumber(self, nums: List[int]) -> int:
        number = 0
        for i in range(32):
            # Count how many of the numbers have bit i set.
            count = 0
            for num in nums:
                if num & (1 << i):
                    count += 1
            # Elements appearing three times contribute a multiple of 3,
            # so a nonzero remainder means the single number owns this bit.
            if count % 3:
                number |= (1 << i)

        # If bit 31 is set, the 32-bit pattern represents a negative
        # number, so convert it to the corresponding negative Python int.
        if number & (1 << 31):
            number -= (1 << 32)

        return number

But I'm confused about a few things:

In Python, integers are arbitrary precision, so they're not stored in 32 bits like in C or C++. sys.getsizeof() even shows 28 bytes for a small integer. So how can we assume the number fits in 32 bits and that 1 << 31 is actually the sign bit?
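For example, on a 64-bit CPython build:

import sys

print(sys.getsizeof(1))         # 28 -- fixed object overhead plus one digit
print(sys.getsizeof(1 << 100))  # larger, because the int object grew
print((2).bit_length())         # 2 -- Python only tracks the bits it needs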

Why do we loop only over i in range(32)? Why not fewer?

If I input small integers (like 2, 3, etc.), they don't "fill" 32 bits — so how can checking the 31st bit be reliable?
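For instance, Python just reports those high bits as zero:

print(format(2, '032b'))  # 00000000000000000000000000000010
print(2 & (1 << 31))      # 0 -- bit 31 of a small positive int reads as 0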

Basically, since Python ints grow as needed and aren’t stored in 32-bit containers, how does this approach still work correctly when simulating 32-bit signed behavior?

I've read similar questions and answers (like on CodeReview.SE), and I get the general idea: Python ints are arbitrary precision, and we simulate 32-bit behavior using bitmasking and shifting. But I'm still confused about why this actually works reliably in Python.

My Questions:

Why can we safely assume a 32-bit simulation works in Python?

Why is checking the 31st bit (1 << 31) meaningful in Python?

Why doesn’t the arbitrary-size nature of Python integers break this logic?

Answer

tl;dr: It works because you're manually simulating 32-bit behavior using bitwise logic.

Python integers can grow to any size, but this solution treats every number as if it were exactly 32 bits wide.

Instead of worrying about how many bits Python actually uses, it only looks at bits 0 through 31. It's like saying:

"I don't care how the number is really stored, I'll just check the low 32 bits."

This way, you simulate 32-bit behavior by counting each bit's frequency modulo 3 and building the result from the bits left over. The final check of bit 31 works because the problem guarantees every value fits in a signed 32-bit integer, so bit 31 is the sign bit of that simulated representation: if it's set, the bit pattern encodes a negative number, and subtracting 1 << 32 converts it to the equivalent negative Python int. The code never depends on Python's internal storage; it just applies fixed 32-bit logic.
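Here is a minimal sketch of that final step in isolation (the helper name to_int32 is just for illustration), assuming the answer fits in a signed 32-bit integer, which the problem's constraints guarantee:

def to_int32(number: int) -> int:
    # Interpret the low 32 bits of `number` as a signed 32-bit value.
    number &= 0xFFFFFFFF          # keep only bits 0..31
    if number & (1 << 31):        # bit 31 set means negative in two's complement
        number -= 1 << 32         # map the bit pattern to Python's negative int
    return number

print(to_int32(2))                # 2  -- positive values pass through unchanged
print(to_int32(0xFFFFFFFE))       # -2 -- the 32-bit two's-complement pattern of -2
print(to_int32(-2 & 0xFFFFFFFF))  # -2 -- masking a negative int round-trips

The subtraction is the whole trick: 0xFFFFFFFE - 2**32 == -2, so the simulated 32-bit pattern maps back onto the Python int it was meant to represent.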
