Overall Rating (1.4 / 5): ★☆☆☆☆
Professor Rating (0.7 / 5): ★☆☆☆☆
Lecture Rating (0.7 / 5): ★☆☆☆☆
Difficulty (3.6 / 5):
Workload: 15 hours/week
Pros:
1. Active community on Piazza with TAs to answer the numerous questions that come up
2. Broken into two discrete halves, so there's no cumulative final. As soon as you're done with the midterm, you won't need to worry about the content from the first half of the course.
Cons:
1. Lots of the material on homeworks isn't actually covered in lecture, or is very confusing to try to understand. Expect to spend a lot of time Googling and asking for help, particularly in the first part of the course.
2. There's no top-level framing of how the various pieces apply to machine learning. Each section takes a discrete topic and mostly focuses on the theory behind it, rather than how they all come together to form the current understanding of ML.
Detailed Review:
This has been my least favorite class so far, and I'm only halfway done. The first half's lectures don't line up well with what's on the HWs, so you end up spending a lot of time figuring things out on your own. The second half is very dry, with 1-1.5 hour lectures that mostly walk through proofs. It's not particularly well taught, and the HWs and tests (particularly in the first half) are pretty tough. If you're going to take this course, expect it to lean heavily toward theory, and I would recommend taking Linear Algebra first to build a baseline for the math used. Taking it over the summer has also been tough: I believe they kept all the units and crammed everything into fewer weeks, so the pace is fast.
Overall Rating (1.4 / 5): ★☆☆☆☆
Professor Rating (4.3 / 5): ★★★★☆
Lecture Rating (2.9 / 5): ★★★☆☆
Difficulty (4.3 / 5):
Workload: 20 hours/week
Pros:
1. TONS of practical experience with PyTorch, as it was used exclusively for everything
2. Lots of practical experience with image-based deep learning. I feel comfortable enough that, given a dataset of images, I could go build models for it
3. Other than the final project, the homework descriptions, along with the provided grader, are extremely helpful. There's really no reason you shouldn't get A's on all of them, because you can keep using the grader to figure out how to score high.
4. The homeworks seem really complicated at first... but then you listen to the lectures meant for the next homework, and the professor gives an example model that's like 90% of what you need. This applies to the first 2 homeworks for sure. For the next 2, the TAs provide a hints guide which is very, very useful.
5. Lectures are good at first and give you some intuition for when to apply what techniques
6. You can finish the class as fast as you want to because everything is available from day 1.
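To give a flavor of the kind of image-model work the homeworks drill, here is a minimal PyTorch training-step sketch. This is generic PyTorch, not the course's template code; the architecture, names, and dummy data are all my own illustration.

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """A minimal CNN of the sort the early homeworks build up to."""
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 32, 1, 1)
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One dummy training step on random tensors stands in for a real data loader.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 6, (8,))
logits = model(images)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The homeworks essentially iterate on this loop with real datasets, bigger architectures, and the provided grader as the feedback signal.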
Cons:
1. Starting from the second half of homework 3, the lectures become less and less helpful, and there are no example models from the professor to lean on. Which is fine, as this is a graduate-level class, but you basically need to start relying on the tips the TAs post in Canvas, the discussion forums, and Google.
2. There's some theoretical background covered at the very beginning of the class, but that's about it. Deep learning still seems magical to me. Granted, I took this alongside Machine Learning, and the very end of the last lecture in that class motivates deep learning, but in hindsight, having that knowledge beforehand would not have helped much. You'll definitely need outside resources if you want to understand the theory better.
3. After taking this class, you'd almost assume deep learning is only used for images. Obviously that doesn't make sense given LLMs, right? Granted, there is a TON of information to cover in a single semester, so the professor did a great job, but we only created models for images and did not cover cutting-edge advances like transformers, which brought about LLMs. Maybe the NLP or Reinforcement Learning class does?
4. ***** YOU NEED YOUR OWN GPU. DO *NOT* USE COLAB. I tried to get around this by using Colab at first, but you're going to learn the hard way one day: you follow the provided instructions to pull in your files, work on your deep learning model in Colab until 5 am, wake up the next day, and find the Colab runtime has disconnected. Poof, all of your work down the drain :(. I guess it's kind of a pro, because this finally taught me to ALWAYS push code to GitHub. You gotta learn the hard way, right? This made me get Colab Pro so it wouldn't happen again, but guess what? Just like the free version, you're still restricted on compute units! Midway through homework 3 I said screw it and got my old GPU working, and it was so much better. I literally cannot imagine someone doing the final project on the free Colab tier. You must be someone who just likes playing life on impossible mode.
5. ***** Alright, now time for the long rant. I have not read through the other reviews, but I guarantee your overall perception of this class will come down to how you did on the Final Project. Without the final project, this class is like a 5/7, but it tanks to a 2/7 because of it. Going into it I had a solid A; coming out of it I barely made a B+. In all honesty, I think I deserved more like an A-, but I'll get to that. For starters, you are given two choices: an image-based agent or a state-based agent. The former basically uses techniques the class has built up to throughout all of the homeworks, while the latter requires Reinforcement Learning, which you either have background in or learn on the fly if you choose that path. Please take Reinforcement Learning before taking this class. I am going to say it again: PLEASE TAKE REINFORCEMENT LEARNING BEFORE TAKING THIS CLASS. The final project takes quite a bit of time because, unlike the homeworks, where they give you a template of working code and lots of labeled data, you need to do everything from scratch. The problem with going the image-based route, even though it makes the most sense practically speaking, is that you're going to realize while doing it that making the image model(s) is only part of the work. You still need to create a controller to actually play the hockey game. That statement will make more sense when you work on the project, but think of it this way: you have a model that tells you the puck is in a hockey game, and a model that even tells you its location. But you still have to win the game, right? What do you do with this information, i.e., how do you tell the kart to move so that you score? This is what the controller entails, and nothing in the course lectures tells you how to do this part.
We ran out of time trying to figure this out, which led to a low score on the grader; getting it working would have put us at an A-, or maybe even barely an A (the class is not curved, btw). You do make a basic controller in HW5, and the provided solution to that is actually really helpful, but it's only part of what you need. You are allowed to use the controllers of the provided state agents, but since an image-based model is naturally not going to give accurate information at all times, those state-based agents, which rely on always knowing accurate information, don't end up performing well. What is nice about this approach is that you can work on a controller in parallel with building the image model(s), but be careful: without the image model, you'll be building the controller against noiseless global data, and that may very well cause problems once you switch to an image model, which will naturally give noisy data. If you do choose the image agent, you should definitely start on this project EARLY, but you need all of the homeworks, which build on each other, before you can really start. So get ahead on everything and then start early too.
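To make the controller idea concrete, here is a deliberately simplified sketch of what such a controller might look like. Everything here is a hypothetical illustration, not the project's actual API: the function name, the `Action` fields, the coordinate convention, and the steering rule are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical action a kart simulator might accept."""
    steer: float         # -1 (full left) to 1 (full right)
    acceleration: float  # 0 to 1

def simple_controller(puck_x: float, goal_x: float, kart_x: float) -> Action:
    """Steer toward the puck, biased so we push it goalward.

    All inputs are assumed to be normalized x-positions in [-1, 1].
    In the image-based route, puck_x would come (noisily) from your
    image model; in the state-based route, from global state.
    """
    # Aim slightly "behind" the puck relative to the goal, so hitting
    # it pushes it toward the goal instead of just bumping it sideways.
    aim_x = puck_x + 0.1 * (puck_x - goal_x)
    error = aim_x - kart_x
    steer = max(-1.0, min(1.0, 3.0 * error))  # proportional steering, clamped
    # Ease off the throttle when turning hard to avoid overshooting.
    acceleration = 1.0 - 0.5 * abs(steer)
    return Action(steer=steer, acceleration=acceleration)
```

The real project needs far more than this (recovery behavior, handling an off-screen puck, noisy detections), but this is the kind of hand-written decision logic the lectures never cover.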
Now, if it wasn't already obvious: although I didn't do it, I think everyone should choose the state-based agent, i.e., use Reinforcement Learning. Just take that class beforehand, or, if you start early enough, the TAs provide some resources for figuring it out. In hindsight this route sounds much better, because you don't need to make image model(s) and then go make a controller; it's all just one model. It's definitely going to be harder to get that model working, but once it works, you're done. And since it operates on global data, which you have access to anyway, the only hurdle is getting the model working, rather than worrying about noisy data. And since they already provide multiple working state-based agents, you can always record the inputs and outputs of a provided agent you like and "learn" to imitate it.
The rant is still not over: they did try to be nice and let the final report count for 70% of the overall project grade, since it's really hard to score well in the online grader. The huge issue with this is that the grading is very subjective. Looking through the class Discord, it seems like a lot of people who got a high score had a much more lenient TA, while those of us who got a lower score got unlucky and had a much more stringent TA. What's annoying about the report is that your score really has no bearing on what you gained from the class or what you learned about deep learning; I could have scored higher or lower and my understanding wouldn't have changed at all. It is great to have a final project that you can later put on your resume or talk about in an interview, but I think they need to rework it somehow. For context, I took this class along with Machine Learning and, due to various circumstances, had very little time to work on the final project, but even then I realistically should have gotten a solid A- rather than barely a B+, especially since we had working code, models, an okay controller, and a report that explained these things. Just glad to finally be done with it.

Tips for the report: the instructions are somewhat confusing because of the emphasis on making it like a journal article and tying it to current research. I think a better way to approach it is to grab one of those journal templates you can find online, like the one from IEEE, and essentially write the kind of lab report you would in school, with an Introduction, Procedures, etc., but be very detailed in your Introduction and Procedures sections. Write it as if you are telling someone who knows nothing at all about the project, the game, and the simulation exactly what you did, all in detail with nice pictures; that should score well even with a stringent TA.
They weight good writing heavily: clean transitions, and a report that stands alone. Could someone who has taken a completely different Deep Learning class understand and replicate exactly what you did? Now that I write it out, that sounds like a great question to follow and answer, and it should hopefully help you score well. Just make sure to also use a journal article format so you don't get docked extra points.
Detailed Review:
USE YOUR OWN GPU, PLEASE DO NOT USE COLAB, EVEN THE PAID VERSION
and
PLEASE TAKE REINFORCEMENT LEARNING BEFOREHAND
Also, I would not take this class alongside another class, purely because of the time commitment for the final project and for homeworks 3 and 4. It is technically possible, but be ready to have barely any free time at all, just like I did when I took this with Machine Learning. As soon as you submit an assignment for one class, you immediately have to start on the next one. This baton-passing basically made it impossible to start on the final project early. In hindsight it is doable if you get ahead in this class early on.
Overall Rating (1.4 / 5): ★☆☆☆☆
Professor Rating (1.4 / 5): ★☆☆☆☆
Lecture Rating (1.4 / 5): ★☆☆☆☆
Difficulty (3.6 / 5):
Workload: 10 hours/week
Pros:
1. Exams were fairly graded and were of decent difficulty level
2. Container and serverless lectures were interesting
3. Project/labs are group based
Cons:
1. Project was painful to setup and even more painful to solve
2. Lecture videos were originally recorded for the on-campus covid batch and are unengaging
3. Some videos are way too long
4. Extremely bad TA support on Ed
5. Projects are graded slowly, manually, and very harshly; missing a single line could cost valuable marks
Detailed Review:
Lectures -
These lectures were recorded for the on-campus covid batch. Hence, they were not made with online students in mind, and I found them far less engaging than other subjects that have slide-based lectures. It made revision before the exams tough too, since you can't just glance over the video frames. They also didn't include the guest lecture videos for us for some reason, which was disappointing. The second half of the course, on containers/serverless, was much more interesting and easier to follow; however, the videos could have been broken into smaller chunks had they been tailored for the online offering. The prof, to his credit, knew his stuff and looked at ease throughout. Also, a lot of lectures had incorrect subtitles in places (and one video didn't have subtitles at all).
Exams -
Exams were not too difficult and were perfectly doable even with less-than-thorough preparation. The grading was fair, and I feel I scored in proportion to what I had studied. The endterm is cumulative on paper, but I didn't see a single question from the pre-midterm topics.
Labs -
The labs have multiple issues, in my opinion. Firstly, it was a nightmare to set up the labs on the unreliable UT machines, so I had to find an old machine to run them locally. The labs are group-based (up to 3 people, minimum 2), which is the only saving grace. The downside is that these labs are not really meant to be solved in groups because of how monolithic they are, and not knowing how a previous lab was solved impacts your ability to solve the next one. The TAs refused to allow individual groups because labs are graded manually and take a lot of time, so they wanted to reduce their workload. The solution for the previous lab is released 5 days after the next lab is out, due to the late-day allowance, so you have to wait that long to start the next lab if your previous submission had issues. Also, there is little to no way of knowing that your submission had issues: even when it seems to work fine, points are deducted for missing a line, an argument, etc. My group could not completely solve the last lab, and there was nothing we could do about it.
Ed Support -
This was shockingly bad, given that the same head TA ran AOS smoothly. The responses on Ed were vague and delayed more often than not, and office hours were cancelled or moved liberally in the second half of the course, making life tough for people in Asian timezones who have to stay up late to attend. Ed itself is a downgrade from Piazza too.
Overall Rating (1.4 / 5): ★☆☆☆☆
Professor Rating (1.4 / 5): ★☆☆☆☆
Lecture Rating (1.4 / 5): ★☆☆☆☆
Difficulty (5 / 5):
Workload: 25 hours/week
Pros:
1. Pretty good TA support
2. Could *maybe* make you a better programmer
3. Increased my pain tolerance for future courses in the program
Cons:
1. Massive workload, both in amount of lecture content and problem sets
2. Teaching style is very hard to follow, overly theoretical
3. Extremely difficult exams
Detailed Review:
This class was more painful than I could have imagined. This is my sixth course in the program, and typically the arc of a course for me is that the first few weeks are pretty tough, but then things start to click, the remainder feels manageable, and I generally enjoy it. That never happened here. Things didn't really click, and what resulted was 12 weeks of suffering.
There are a few problems with this course. First, there is more lecture material than in any other course I've taken: on average there's about 3 hours of very, very dense material to watch each week. The material is *highly* theoretical, so think many proofs, lemmas, etc. This is a theory class, so know before you start that there is zero coding and a pretty high expectation of mathematical maturity. That's fine in itself; it's okay to teach a highly theoretical class (though I imagine the average student in this program is more interested in practical classes). The problem is that each week you jump to an entirely new topic, so you never really have time to understand the material deeply. What resulted was surface-level knowledge of an array of seemingly related topics. I'd also argue that the content was ordered all wrong; for example, they introduce basic graph algorithms after more advanced graph topics, which made no sense to me.
The problem sets, 6 in total, take a very long time to complete and expect a deep understanding of the material. Because of my general lack of understanding, it would take me hours just to figure out what a problem was asking. This is the only course where I had to watch pretty much every recorded office hour session, where they gave "hints" or "clarifications" (usually indecipherable, cryptic suggestions for how the answer key wants you to solve a problem) for the problem sets. Each problem set would take me probably 20 hours or so, and I never felt confident in the answers I gave. That said, it didn't really matter, because many peer graders would just give me high marks, either because they didn't want to spend the time properly grading or because they felt bad that we were all suffering so much. By problem set 4 or so I found that there wasn't a direct relationship between time spent and my grade, so I relaxed a bit on the last two. The answer keys for the problem sets were immense walls of text; each time I wondered how anyone could possibly get this right.
The exams were insanely hard as well. You have 3 hours to complete far more problems than is feasible. Luckily the grading was pretty lenient, so even though I left large chunks of the first exam blank, I still managed to beat the median.
In short, this class was rough - rougher than any other course I've taken by orders of magnitude. I wanted to drop many times for the sake of my mental health and just enjoying life, but I always concluded that I'd come too far and I knew that as long as I completed the assignments with something passable then I'd get a decent grade. Do I think I'm a better programmer having taken this? Probably not. But the silver lining is that I know now that the last 4 courses I take will feel like a cakewalk by comparison.