Overall Rating (3.9 / 5): ★★★★☆
Professor Rating (0 / 5): ☆☆☆☆☆
Lecture Rating (0 / 5): ☆☆☆☆☆
Difficulty (1.8 / 5):
Workload: 8 hours/week
This class gives back what you put into it. If you just want to pass and get an A, you can do that without too much effort. However, if you want to really dive deep into Android programming, this class gives you the tools to start doing that and to get familiar with the Android environment. Personally, I did not watch the lectures after the first few weeks, as I didn't feel the videos were very useful; I just did the assignments with a lot of Googling and by finding the specific relevant parts of the lectures that I needed. The assignments themselves are mostly fill-in-the-blank: the prof and TAs set up the hard stuff for you in starter code, and you just have to figure out what's left to make the apps work. The final project is a full app from scratch, and it's pretty fun. People made some very cool stuff and put a lot of effort in. Overall, this was a pretty fun class and a good intro to Kotlin/Android. Also note that he says you should know Java or Kotlin coming in, but I didn't know either and still kept up fine; you will learn along the way.
Overall Rating (3.9 / 5): ★★★★☆
Professor Rating (3.6 / 5): ★★★★☆
Lecture Rating (3.6 / 5): ★★★★☆
Difficulty (3.6 / 5):
Workload: 15 hours/week
Pros:
1. Very good overview of deep learning with PyTorch
2. Assignments were manageable and challenging when appropriate
3. Course schedule allowed you to work ahead with good time management
Cons:
1. Final project can be stressful and confusing without much guidance
2. Quizzes can be pesky
Detailed Review:
Overall this is a very good class that covers a reasonably good amount of relevant information on deep neural networks and different types of deep learning. Other people have mentioned that the content is becoming a little outdated, which is somewhat true; however, the more up-to-date models and frameworks in the field are addressed in the last lectures, albeit without too much depth or any evaluation. The course could probably use a bit of a revamp soon to make it fresher and a little more relevant.
The assignments are reasonable: #1 and #2 provide a nice introduction to neural networks in PyTorch. Assignments 3 and 4 ramp up quite quickly, so it is advantageous to complete 1 and 2 very quickly and save more time for the more challenging ones. These also require more training time (could be up to 10-12 hours), so plan accordingly for how long the model needs to train.
Quizzes can be sneaky and tricky, especially for computing gradients. You get two attempts per question, so make sure that you carefully review your computations, as the 10% for quizzes can matter at the end. Also, DO NOT forget to complete the quizzes on time, as they are self-paced.
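As a rough illustration of the kind of gradient computation those quizzes test (my own toy example, not an actual quiz question), it helps to check a hand-derived chain-rule gradient against a finite-difference estimate before committing an answer:

```python
import math

# Toy example (mine, not from the course): verify a hand-derived
# gradient against a finite-difference estimate, the kind of sanity
# check that catches chain-rule slips before you burn a quiz attempt.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x, y):
    # L(w) = (sigmoid(w*x) - y)^2
    return (sigmoid(w * x) - y) ** 2

def grad(w, x, y):
    # Chain rule: dL/dw = 2*(s - y) * s*(1 - s) * x, with s = sigmoid(w*x)
    s = sigmoid(w * x)
    return 2.0 * (s - y) * s * (1.0 - s) * x

w, x, y = 0.7, 1.5, 1.0
eps = 1e-6
numeric = (loss(w + eps, x, y) - loss(w - eps, x, y)) / (2 * eps)
print(abs(grad(w, x, y) - numeric) < 1e-6)  # analytic and numeric agree
```

The same centered-difference trick works for any scalar loss, so it generalizes to whatever expression a quiz throws at you.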
As mentioned before, the final project continues to be difficult and stressful, as there is minimal guidance on what approaches to take. Unless you are very confident in your abilities, I would choose to work in a group. Our group went for a state-based agent, and while we were just over the lower quartile for the code portion, our report received very high marks. Suggestion: work extra hard on the report, make sure that you cover every little item in the rubric, and format it using LaTeX/Overleaf so it looks like a professional research paper. Since the code is worth only 9%, even a "failing" implementation can still give you a good mark in the class if you perform reasonably well on everything else. We tried many different approaches to the learning for the state-based agent, but ultimately our performance sort of flat-lined.
Overall this was one of the better courses in the program and I came away with a much better sense of how deep networks train, and how different parameters/choices of architecture can improve performance.
Overall Rating (3.9 / 5): ★★★★☆
Professor Rating (0 / 5): ☆☆☆☆☆
Lecture Rating (0 / 5): ☆☆☆☆☆
Difficulty (2.9 / 5):
Workload: 10 hours/week
The name of the game is copy the algorithm. Each homework requires using pseudocoded algorithms from the textbook that need to be followed to the letter. The nuances are tricky at first, but you'll understand the notation with enough practice. Really interesting course content. The teachers are okay; the exam is a bit rough. It gets curved, though, so don't worry. Definitely take Deep Learning first.
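For a flavor of what "copy the algorithm" looks like in practice (my own illustration; the course's actual assignments and textbook notation may differ), here is textbook-style value iteration translated line by line into Python:

```python
# Illustrative only: a literal translation of textbook-style value
# iteration pseudocode into Python. The course's actual algorithms
# and notation may differ.

def value_iteration(states, actions, transition, reward, gamma=0.9, theta=1e-8):
    """transition[s][a] -> list of (prob, next_state); reward[s][a] -> float."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            v = V[s]
            # V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) * V(s') ]
            V[s] = max(
                reward[s][a] + gamma * sum(p * V[s2] for p, s2 in transition[s][a])
                for a in actions
            )
            delta = max(delta, abs(v - V[s]))
        if delta < theta:  # the pseudocode's "repeat until delta < theta"
            return V

# Tiny two-state chain: "go" moves 0 -> 1 deterministically; 1 absorbs.
states, actions = [0, 1], ["go"]
transition = {0: {"go": [(1.0, 1)]}, 1: {"go": [(1.0, 1)]}}
reward = {0: {"go": 1.0}, 1: {"go": 0.0}}
V = value_iteration(states, actions, transition, reward)
print(round(V[0], 3))
```

The point is exactly what the review says: the loop structure mirrors the pseudocode one-to-one, and the tricky part is pinning down what each symbol (P, R, delta, theta) means in the book's notation.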
Overall Rating (3.9 / 5): ★★★★☆
Professor Rating (0 / 5): ☆☆☆☆☆
Lecture Rating (0 / 5): ☆☆☆☆☆
Difficulty (3.9 / 5):
Workload: 12.5 hours/week
Both Klivans and Liu are good. Klivans gave you rich content, but that means you need to navigate your own way through it; Liu, on the other hand, guided you through the maze. The textbook was half useless, with most of the examples being really abstract.
Overall Rating (3.9 / 5): ★★★★☆
Professor Rating (5 / 5): ★★★★★
Lecture Rating (5 / 5): ★★★★★
Difficulty (3.9 / 5):
Workload: 15 hours/week
Pros:
1. Literally the title - best class to end on if you plan to take all of the ML classes (I will explain why below)
2. Professor was very knowledgeable, explained concepts well, and actually tried to engage on the discussion forum
3. Homeworks have a ton of the boring boilerplate code already done for you, so you can focus on actually applying the NLP concepts
4. You are actually encouraged to use LLMs, which was really funny and cool to me.
Cons:
1. I never like classes with final projects, and this one was time-consuming; it took a very long time to wrap your mind around what you actually had to do (but it was graded very nicely).
2. Having to grade your peers' homeworks is always annoying.
3. It's annoying that you're forced to do all of the problem sets after each module. They're all due at the very end of the class, and I think most people naturally just wait until the very end to be done with them. Very much a "went in one ear, out the other" situation.
4. (Slight con) Lectures became hard to follow at the very end of the course, I think both because the concepts became more obscure and because we were focused more on the final project.
Detailed Review:
The professor does a fantastic job of clearly and concisely reviewing quite literally the fundamentals of all the other ML classes in the program (ML/DL/RL) at the very beginning of the class. It did a great job of reinforcing everything I had learned and helped me really understand what the ultimate motivation of ML actually is. That was exactly the order I took the classes in, with this being the last, so I thought it worked out perfectly. I definitely would not take this class first, nor would I take it before DL, since you definitely do use neural networks, but you could take it before RL since there's little overlap.
Reiterating the above, the professor was great: definitely very knowledgeable, and he cares a ton about the class. I definitely came out of it with a much better understanding of NLP.
I think the hours per week you see is a little misleading. It's lopsided, imo: you do not have something due every week, and some homeworks were pretty easy, but the final project took a lot of time, so it's hard to give an overall average. I did take this class alongside another class (SIMPL) and it was doable, but not without a lot of questioning my life's decisions, especially when SIMPL had its big assignment/exams due and this class had its midterm and final paper due. Looking back, I still think it's doable if anyone is interested in doubling up and trying to finish the program ASAP.
Homeworks were fair: some were easy, and a couple were a little tough, but overall not too bad. They were also designed very well to reinforce the concepts.
Midterm was fair, but definitely had a lot of questions that were tricky. You really need to understand what is going on for the conceptual questions, and really need to understand how to do the math calculations, especially attention.
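For the attention calculations specifically, it's worth being able to do the whole computation by hand on a tiny example. Here is my own sketch of scaled dot-product attention (an illustration, not an actual exam question):

```python
import math

# Hand-computable sketch of scaled dot-product attention
# (my illustration, not an exam question):
#   attention(q, K, V) = softmax(q K^T / sqrt(d_k)) V

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(q, keys, values, d_k):
    # Score one query against every key, scaled by sqrt(d_k).
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in keys]
    weights = softmax(scores)
    # Output is the attention-weighted sum of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention(q, keys, values, d_k=2)
print([round(x, 2) for x in out])
```

With numbers this small you can verify every step with a calculator, which is exactly the skill the midterm's math questions seem to be probing: the query matches the first key more strongly, so the output leans toward the first value vector.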
The final project was personally a big time sink, but again that was because it took me a really long time to understand what actually had to be done. I think the professor kept it very open-ended on purpose to give us freedom, but most of us really just wanted to do the right amount of work to get an A. Assuming it doesn't change, hopefully this helps: you are asked to choose some NLP model (four are specifically recommended to you), then choose some standard dataset (again, a few are recommended), analyze it with the model, and figure out how to fix the model's shortcomings. Then you're given a ton of papers that did essentially just that, in many, many different ways. My advice is that your grade is ultimately determined by how much effort you put in and how unique your approach was, or really, how many "twists" you applied to the basic path here. Let's say you pick a model, pick a dataset, find where the model's shortcomings are (there was emphasis here), create a new dataset based on a shortcoming you found, train on that new dataset (you can just use an LLM to generate it), then show the results on the old dataset and the new dataset, and analyze and ascertain what happened and why (there is massive emphasis on this final part). That approach is basically what is outlined in the last homework and discussed in class, and would probably get you somewhere in the B range on the project. Getting a higher grade comes down to the number of "twists" you apply to this basic path. What if you choose two models, follow the same path, and then compare them both? What if instead of one dataset, you chose two? What if instead of generating one dataset based on one shortcoming, you generate a dataset for each shortcoming you found? How could you go above and beyond a basic analysis of the results (a couple of papers do this in clever ways)?
Really just think about the ways you can enhance the basic path and make it more involved/complex, as the more effort is shown, the better your grade will be. Do that, and make sure to really analyze your results well and you will definitely get an A.
Overall Rating (3.9 / 5): ★★★★☆
Professor Rating (0 / 5): ☆☆☆☆☆
Lecture Rating (0 / 5): ☆☆☆☆☆
Difficulty (0.7 / 5):
Workload: 8 hours/week
Probably the easiest class you'll take. Most things are open notes/open book, and the teachers are very focused on you learning, so homeworks can be turned in late and redone if you don't get full credit. I didn't find the material super easy to pick up, as I don't have a strong math background, but given that it's mostly open notes, it turned out fine. I wish this were a theory course, as there wasn't much practical info related to what we covered in RL/DL, but maybe it goes better with ML.
Overall Rating (3.9 / 5): ★★★★☆
Professor Rating (0 / 5): ☆☆☆☆☆
Lecture Rating (0 / 5): ☆☆☆☆☆
Difficulty (2.9 / 5):
Workload: 12 hours/week
The first half of the course had hard homeworks. I specifically remember the first homework being the hardest; I skipped the second (one homework is dropped). I thought the first exam was actually pretty straightforward, and the curve was nice. The second half of the course was much, much easier. The second exam seemed hard, so I thought I was one of the few who did well, but apparently everyone did well on it.
Overall Rating (3.9 / 5): ★★★★☆
Professor Rating (0 / 5): ☆☆☆☆☆
Lecture Rating (0 / 5): ☆☆☆☆☆
Difficulty (1.8 / 5):
Workload: 10 hours/week
Very well taught course. Professors are nice and responsive and the workload is very manageable. You'll get good practice solving math problems involving vectors/matrices, as well as using MATLAB. Might help to brush up on your undergrad linear algebra beforehand (but not vital) - you can check out their undergrad course, or maybe Gilbert Strang's (MIT) 18.06 course on MIT OCW.
Overall Rating (3.6 / 5): ★★★★☆
Professor Rating (4.3 / 5): ★★★★☆
Lecture Rating (3.6 / 5): ★★★★☆
Difficulty (1.4 / 5):
Workload: 5 hours/week
Pros:
1. Optimization: found it very relevant as a data scientist - I actually decided to use Nesterov acceleration for an NLP project to speed up training time.
2. Online learning: interesting topic and also can be relevant for data scientists.
3. Both professors were fairly active on Piazza.
Cons:
1. Peer grading - most were chill - I just hate peer grading.
2. Since optimization is its own course (with quite a bit of overlap, I'm told), I'd rather the OL portion were fleshed out into a full course. There is no reason to combine these two, as they are fairly standalone topics.
Detailed Review:
I'm just glad I didn't have to take Optimization. Both halves of the course were decent. The two-halves format is something I'm still getting used to; it was quite similar to ML in that respect, except these are fairly disjoint topics. In the first half (optimization), the lectures were quite verbose, going through all the proofs in detail, while in the second half (bandits), the lectures rarely covered any proofs, told us to read the notes instead, and made a good portion of the material optional. I spent about 6-7 hours/wk on optimization and 3-4 hours/wk on bandits.
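Since the Nesterov acceleration mentioned in the pros is one of the course's most directly applicable ideas, here is a minimal sketch of the update rule (my own illustration on a made-up quadratic objective, not course code):

```python
# Minimal Nesterov accelerated gradient sketch (my illustration, not
# course material): minimize f(x) = x^2 using a look-ahead gradient.

def grad(x):
    return 2.0 * x  # gradient of f(x) = x^2

x, v = 5.0, 0.0       # initial point and velocity
lr, mu = 0.1, 0.9     # learning rate and momentum coefficient
for _ in range(200):
    # Evaluate the gradient at the look-ahead point x + mu*v; this
    # "peek ahead" is what distinguishes Nesterov from plain momentum.
    v = mu * v - lr * grad(x + mu * v)
    x += v
print(abs(x) < 1e-6)  # converged to the minimum at 0
```

Swapping the look-ahead gradient for `grad(x)` gives classical momentum; in deep learning libraries this is typically just a flag on the SGD optimizer rather than something you hand-roll.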
Overall Rating (3.6 / 5): ★★★★☆
Professor Rating (3.6 / 5): ★★★★☆
Lecture Rating (2.9 / 5): ★★★☆☆
Difficulty (3.6 / 5):
Workload: 5 hours/week
Pros:
1. Serverless and Container lectures were super interesting
2. Groups of 3 was nice
3. TA's were responsive
Cons:
1. Projects were hard and lectures didn't help with them
2. Some of the lectures were out of date (although he mentions this)
3. TAs tended to give overly vague guidance which didn't help
Detailed Review:
I really, really hope they refactor the projects. Throughout the semester, you're working on an Academic VM whose code is not really well written; there are some sloppy comments that guide you, but for the most part you're on your own to figure it out.
It was really cool to learn more about serverless and containers; I can talk intelligently about those now. I really hope one of the projects is tailored toward those sometime in the future, because it would be nice to get some hands-on practice with them.
I also think some of the papers we looked over are really interesting, but it's been years since they were published; it would be nice to see where those studies are now. Basically, a new batch of videos each semester would make sense to me.
I did email the professor with those recommendations so let's see what happens.