Sensing Algorithms

Today I had a very strange and quite disturbing experience. I was finally able to get an interview at Apple, the company of my dreams. It was not easy to get, but I was lucky enough to run into a hiring manager at a Stanford event. After a screening call with a recruiter, a date was set for an hour-long first-round interview. The manager asked about my research and how I use data analysis and signal processing in my daily work. They also told me about what they do, explaining that their team is responsible for developing algorithms for reading out display data.

Technical Question

Next, the hiring manager switched to a technical question. They shared their screen with the formulas below and then explained the task orally, so you will have to trust that I remember the wording of the problem exactly.

Question 1

Consider random variables

\[\displaylines{ y_1 = x + n_1, \\ y_2 = n_1 + n_2 }\]

where \(n_1 \sim \mathcal{N}(0,\sigma_1^2)\) and \(n_2 \sim \mathcal{N}(0,\sigma_2^2)\) are independent normally distributed random variables and \(x\) is a parameter. What is an expression for the optimal estimator \(\hat{x}\) of \(x\)?

Solution 1

I immediately said that the solution should have the form \(y_\beta = y_1 + \beta y_2\) with a yet-undetermined coefficient \(\beta\). The expected value is \(\mathbb{E}[y_\beta] = x\), so the sample average \(\langle{ y_\beta }\rangle{} = \frac{1}{N}\sum_{i=1}^N y_\beta^{(i)}\) is an unbiased estimator of \(x\) regardless of the value of \(\beta\).

However, the question asked for an optimal estimator. I said that an optimal estimator should not only have zero bias, but also have minimal variance.

\[\operatorname{Var}[y_\beta] = \mathbb{E}\!\left[(y_\beta-x)^2\right] = \mathbb{E}\!\left[(n_1 + \beta [n_1+n_2])^2\right],\] \[\frac{1}{2}\frac{\partial }{\partial \beta} \operatorname{Var}[y_\beta] = \mathbb{E}\!\left[(n_1 + \beta [n_1+n_2])(n_1+n_2)\right] = \sigma_1^2 + \beta (\sigma_1^2 + \sigma_2^2).\]

Setting this derivative to zero, I find \(\beta = -\sigma_1^2/(\sigma_1^2+\sigma_2^2)\), and finally the bias-free, minimum-variance estimator of \(x\) is

\[\hat{x} = \frac{1}{N}\sum_{i=1}^N \left[ y_1^{(i)} - \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2} y_2^{(i)} \right].\]
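
As a sanity check (my own addition, not part of the interview; the parameter values below are made up), a short Monte Carlo simulation confirms that both choices of \(\beta\) give an unbiased estimate, but the optimal one has a smaller variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up parameter values, chosen just for this check.
x, sigma1, sigma2 = 2.0, 1.0, 0.5
N = 100_000

n1 = rng.normal(0.0, sigma1, N)
n2 = rng.normal(0.0, sigma2, N)
y1 = x + n1
y2 = n1 + n2

beta = -sigma1**2 / (sigma1**2 + sigma2**2)

# Per-sample estimates: naive (beta = 0) vs. the minimum-variance choice.
naive = y1
optimal = y1 + beta * y2

print(f"naive:   mean {naive.mean():.3f}, var {naive.var():.3f}")
print(f"optimal: mean {optimal.mean():.3f}, var {optimal.var():.3f}")
# Both means should be ~2.0 (unbiased); the optimal variance should be
# sigma1^2 sigma2^2 / (sigma1^2 + sigma2^2) = 0.2 instead of sigma1^2 = 1.0.
```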

The interviewer was pleased 🙂 with the answer, but told me that they had expected a different solution, using an alternative definition of the optimal estimator through Maximum Likelihood:

as it is done in cs229.

After the interview, I also solved this question using the Maximum Likelihood Estimator (MLE); you can find that solution in the notes attached to this post. Funnily enough, the estimators obtained by maximizing the likelihood and by minimizing the variance are not always the same; moreover, the MLE sometimes has non-zero bias.
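
If you want to compare the two without the algebra, here is a minimal numerical sketch (mine, not the one from the notes; the parameters are made up) that maximizes the joint Gaussian log-likelihood of \((y_1, y_2)\) over \(x\) directly. For this particular model, with both noise variances known, the numerical MLE should land on essentially the same value as the minimum-variance estimator:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Made-up parameters for the comparison.
x_true, sigma1, sigma2 = 2.0, 1.0, 0.5
N = 10_000

n1 = rng.normal(0.0, sigma1, N)
n2 = rng.normal(0.0, sigma2, N)
Y = np.stack([x_true + n1, n1 + n2])          # shape (2, N)

# Joint covariance of (y1, y2); the shared n1 gives Cov(y1, y2) = sigma1^2.
Sigma = np.array([[sigma1**2, sigma1**2],
                  [sigma1**2, sigma1**2 + sigma2**2]])
Sigma_inv = np.linalg.inv(Sigma)

def neg_log_likelihood(x):
    # Gaussian negative log-likelihood up to an x-independent constant.
    resid = Y - np.array([[x], [0.0]])
    return 0.5 * np.einsum("in,ij,jn->", resid, Sigma_inv, resid)

x_mle = minimize_scalar(neg_log_likelihood).x

beta = -sigma1**2 / (sigma1**2 + sigma2**2)
x_minvar = np.mean(Y[0] + beta * Y[1])

print(f"MLE:          {x_mle:.6f}")
print(f"min-variance: {x_minvar:.6f}")
```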

Follow-up to the technical question

Next they asked me a follow-up: now make \(x\) from the previous question a Gaussian random variable, \(x \sim \mathcal{N}(0,\sigma_0^2)\), and find an estimator for \(x\).

Question 2

Consider random variables

\[\displaylines{ y_1 = x + n_1, \\ y_2 = n_1 + n_2 }\]

where \(n_1 \sim \mathcal{N}(0,\sigma_1^2)\), \(n_2 \sim \mathcal{N}(0,\sigma_2^2)\) and \(x \sim \mathcal{N}(0,\sigma_0^2)\) are independent normally distributed random variables. What is an expression for the optimal estimator \(\hat{x}\) of \(x\)?

Trying to solve Question 2

I was confused 😕 I could not understand what it means to estimate a random variable, so I asked the interviewer to explain the question to me again. They just repeated what they had said before without giving me any new information…

I had to improvise on the spot, since the interviewer expected me to solve the problem as stated, even though it made no sense from a mathematical point of view. I interpreted the question as follows:

Question 2.2

Consider random variables

\[\displaylines{ y_1 = x + n_1, \\ y_2 = n_1 + n_2 }\]

where \(n_1 \sim \mathcal{N}(0,\sigma_1^2)\), \(n_2 \sim \mathcal{N}(0,\sigma_2^2)\) and \(x \sim \mathcal{N}(x_0,\sigma_0^2)\) are independent normally distributed random variables, and \(x_0\) is the only unknown parameter of the distribution. What is an expression for the optimal estimator \(\hat{x}_0\) of \(x_0\)?

Solution 2.2

Again, let \(y_\beta = y_1 + \beta y_2\). The average \(\langle{y_\beta}\rangle{}\) is a bias-free estimator of \(x_0\) for any value of \(\beta\), since \(\mathbb{E}[y_\beta] = \mathbb{E}[x] = x_0\). Let's minimize the variance of our estimator:

\[\operatorname{Var}[y_\beta] = \mathbb{E}\!\left[(y_\beta-x_0)^2\right] = \mathbb{E}\!\left[(n_1 + x-x_0 + \beta [n_1+n_2])^2\right],\] \[\frac{1}{2}\frac{\partial }{\partial \beta} \operatorname{Var}[y_\beta] = \mathbb{E}\!\left[(n_1 + x-x_0 + \beta [n_1+n_2])(n_1+n_2)\right] = \sigma_1^2 + \beta (\sigma_1^2 + \sigma_2^2) = 0,\] \[\beta = \frac{-\sigma_1^2}{\sigma_1^2+\sigma_2^2}.\]

So the answer to Question 2.2 is the same as to Question 1. This is actually a quite surprising and interesting result, and in hindsight there is a simple argument for it: \(x - x_0\) is independent of \(n_1\) and \(n_2\), so \(\operatorname{Var}[y_\beta] = \sigma_0^2 + \mathbb{E}[(n_1 + \beta[n_1+n_2])^2]\), which differs from the variance in Question 1 only by a constant offset that does not depend on \(\beta\).
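
To convince myself, here is a small simulation (again my own sketch, with made-up parameter values) that sweeps \(\beta\) and checks that the empirical variance of \(y_\beta\) is minimized at the same \(\beta\) as before, even though \(x\) is now random:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up parameters; x is now itself Gaussian around x0.
x0, sigma0, sigma1, sigma2 = 2.0, 0.8, 1.0, 0.5
N = 200_000

x  = rng.normal(x0, sigma0, N)
n1 = rng.normal(0.0, sigma1, N)
n2 = rng.normal(0.0, sigma2, N)
y1, y2 = x + n1, n1 + n2

# Empirical variance of y_beta over a sweep of beta values.
betas = np.linspace(-1.0, 0.5, 301)
variances = [np.var(y1 + b * y2) for b in betas]

print(f"empirical argmin: {betas[np.argmin(variances)]:.3f}")
print(f"predicted:        {-sigma1**2 / (sigma1**2 + sigma2**2):.3f}")
# Both should be ~ -0.8: the sigma0^2 term shifts the whole variance
# curve upward but does not move the location of its minimum.
```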

I showed the interviewer this solution, hoping that it was what they had meant by Question 2, but they did not approve of my answer 😒

I did not know what else I could do, so I just waited for the interviewer to lead the interview. They looked at the solution I showed on a piece of paper and also did not find anything to say. It felt like they expected a different answer, but could not pinpoint what was wrong with my solutions. Finally, the interviewer told me that we should talk about something else, and after some chatting about other positions at Apple, the interview came to an end.

Conclusion

I still do not understand what I was expected to say.

  • Was it a trick question, and the interviewer wanted me to insist that the question they were asking is poorly defined?
  • Was the interviewer genuinely stupid, not understanding that the question they asked is invalid?
  • Did I mishear the question statement twice?
  • Did they actually expect me to find an estimator for \(\sigma_0\)? Based on the conversation we had, I do not think so. It would also require a humongous amount of calculation…

Solutions: Download PDF