I suspect the ODK community is a good one to ask about this, as there is a mix of computer scientists and thoughtful experts from many different fields here. I am an ecologist, not a computer scientist, but I am highly interested in computer science. I am also interested in philosophy and cultural trends, and I have felt a sense of awe at the speed with which machine learning has developed in recent years.
My tech-philosophy question for everyone is this: do you think the rapid progress being made with brute-force algorithms and machine learning is a help or a hindrance to the creation of genuinely conscious artificial intelligence? And if you had to guess how many years it will take humans to develop genuinely conscious AI (assuming it is possible at all), what would your guess be?
I'm no neuroscientist, so I'm not sure "genuinely conscious" is the right term. What I mean is artificial intelligence that is self-aware. Instead of merely calculating numbers, chess moves, linguistic responses, or automobile navigation through brute-force algorithms like a fancy adding machine, it would be aware that it is doing so. In the same way a human can think, "Yes, I am eating blueberries" as they eat blueberries, the AI would think to itself, "Yes, I am making this delicious move to capture this unsuspecting bloke's pawn, mwuahahaha!"