[–] [deleted] 0 points 1 points (+1|-0) ago  (edited ago)

[Deleted]

[–] revofire [S] 0 points 0 points (+0|-0) ago 

https://www.youtube.com/watch?v=pid0lUH467o is what I'm watching right now. I don't want to know exactly what their code is, I want to see the structure. If I can study the structure then I can rebuild it in my own language anyway.

I have already checked out Markov chains; they're only a notch above random. Fairly useless for what I want. I want legitimate deep learning. But something tells me that deep-learning structure is uber-complicated, especially since it takes supercomputers to run?
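
(For context, this is about the level of the Markov approach I mean; a minimal Python sketch with a made-up corpus, just to show why it only feels a notch above random:)

```python
import random
from collections import defaultdict

# Tiny illustrative corpus; any chat log would be handled the same way.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Order-1 Markov chain: map each word to the words observed right after it.
chain = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    chain[current].append(following)

def babble(start, length=8):
    """Generate text by repeatedly sampling a recorded successor word."""
    word, output = start, [start]
    for _ in range(length):
        successors = chain.get(word)
        if not successors:   # dead end: nothing was ever seen after this word
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

print(babble("the"))  # e.g. "the cat sat on the rug"
```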

I want to make a chat bot on the next level past Tay.

I already have a concept in mind for something that isn't a chat bot, but it's good for basic learning. In a game, I have an AI who's supposed to be living and breathing in her world. She can use the various functions of the game engine to procure information about something, such as its color, its movements, etc., and then she stores that under a relation. When you ask her what something is, she tries to figure it out from what she's studied. Of course this is VERY basic and has very little functionality compared to what I want, but it's amazing, no? I can make that work. But where do we go after that?
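
(Here's roughly how I picture that relation idea in Python; the attribute names and the overlap-based guess are just placeholders, not anything the game engine actually exposes:)

```python
# Hypothetical "relation" store: each known thing maps to the attributes observed in-game.
observations = {}

def observe(name, **attributes):
    """Record attributes (color, movement, ...) the engine reports for an object."""
    observations.setdefault(name, {}).update(attributes)

def guess(**attributes):
    """Answer 'what is this?' by finding the stored thing sharing the most attributes."""
    def overlap(known):
        return sum(1 for k, v in attributes.items() if known.get(k) == v)
    best = max(observations, key=lambda name: overlap(observations[name]), default=None)
    return best if best and overlap(observations[best]) > 0 else "I don't know yet"

observe("bird", color="red", movement="flies")
observe("rock", color="grey", movement="still")
print(guess(color="red", movement="flies"))  # -> "bird"
```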

[–] Project2501 0 points 0 points (+0|-0) ago 

> Fairly useless for what I want. I want legitimate deep learning. But something tells me that deep-learning structure is uber-complicated, especially since it takes supercomputers to run?

Yes. It turns out that labs of researchers at universities and private companies are trying to outdo one another in this area, with real money on the line when it comes to the ease of use of Cortana/Siri/Watson. But even though the obscene stuff is done on supercomputers, the real world still has to incorporate conventional, personal computing from time to time.

Warning, sci-hub link, but how much of this paper can you "grok"? SamBone is on the money: to really get into what MS potentially did with Tay, you would have to understand the principles behind self-learning algorithms and deconstruct what they have made publicly available, or what some of their researchers have published, like here. Personally, if I were aiming to learn as much as possible about MS's past research, I would begin with this paper and focus on the section titled "7. RECURRENT NETWORKS FOR LANGUAGE MODELING", along with its citations. It is three years out of date, but it is what they have published.
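
If it helps, the core idea in that section boils down to something like this toy numpy sketch of a single recurrent step over characters; the sizes and character set here are made up for illustration, not taken from the paper:

```python
import numpy as np

# Toy character-level recurrent step: new hidden state from old state + current input.
vocab = list("abcdefghijklmnopqrstuvwxyz ")
hidden_size = 16

rng = np.random.default_rng(0)
Wxh = rng.normal(scale=0.1, size=(hidden_size, len(vocab)))   # input -> hidden
Whh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden
Why = rng.normal(scale=0.1, size=(len(vocab), hidden_size))   # hidden -> output

def step(char, h):
    """One RNN step: consume a character, update the hidden state, predict the next one."""
    x = np.zeros(len(vocab))
    x[vocab.index(char)] = 1.0                     # one-hot encode the input character
    h = np.tanh(Wxh @ x + Whh @ h)                 # recurrence: memory of everything so far
    logits = Why @ h
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the next character
    return probs, h

h = np.zeros(hidden_size)
for c in "tay ":
    probs, h = step(c, h)
print(vocab[int(probs.argmax())])  # the character this (untrained) net would predict next
```

Training is just repeatedly adjusting those three weight matrices so the predicted next character matches the actual one, which is where the real complexity (and the supercomputers) come in.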

SamBone also said to implement some code. Really, do it. Once you get a better idea of some concepts like Markov chains, neural networks, support vector machines, whichever algorithms catch your eye, you'll start to understand mathematically how they begin to correct for error. No one knows what level of understanding you have when it comes to either the academic or the practical aspects of machine learning.
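
As the simplest possible example of "correct for error", here's a bare-bones gradient-descent fit in Python; the data and learning rate are made up, and it has nothing to do with any particular library:

```python
# Minimal error correction: fit y = w*x to made-up data by gradient descent.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w = 0.0              # initial guess for the weight
learning_rate = 0.05

for epoch in range(200):
    grad = 0.0
    for x, y in data:
        error = w * x - y        # how wrong the current prediction is
        grad += 2 * error * x    # derivative of squared error with respect to w
    w -= learning_rate * grad / len(data)   # nudge w against the gradient

print(round(w, 2))   # converges near 2.0
```

Every method you listed does some version of that loop, just with many more parameters and a fancier error measure.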

[–] [deleted] 1 points 0 points (+1|-1) ago 

[Deleted]

[–] brandon816 0 points 0 points (+0|-0) ago 

Just FYI, I'm pretty sure MS actually released the source code on this one.

[–] toats 0 points 0 points (+0|-0) ago