
Interview with Dr. Robert J. Marks II

Updated: Sep 20, 2020

Photo: Daniel Reeves/Evolution News

Journalist: Jeryn Anthonypillai

Jeryn: Hello and welcome to SciSection. My name is Jeryn Anthonypillai and I'm a journalist for the SciSection radio show, broadcast on the CFM 93.3 radio station. We are here today with Dr. Robert J. Marks II, the Director and a Senior Fellow of the Walter Bradley Center for Natural & Artificial Intelligence. Thanks so much for joining us, Dr. Marks.

Dr. Marks: Oh, you're very welcome. Thank you Jeryn for inviting me.

Jeryn: So to begin, would you like to give a quick overview of the research topics you have done over the years?

Dr. Marks: Damn, I'm kind of an old guy, so I've had a lot of research topics. But the common denominator of my major focus over the last 30 years has been artificial intelligence. I did a lot of work in neural networks: I was the editor-in-chief of the IEEE Transactions on Neural Networks and the president of the IEEE Neural Networks Council, and I've published a lot. I have hundreds of publications, and a few of them are actually good. I'm still working hard right now, directing the Walter Bradley Center for Natural and Artificial Intelligence. One of our purposes is to bring a little bit of sanity to the discussion of artificial intelligence, to try to defuse some of the hype currently surrounding artificial intelligence in the media and other places.

Jeryn: For sure, so what made you interested in pursuing research on artificial intelligence?

Dr. Marks: Well, initially it was: my goodness, computers can be intelligent. And I used to think that, yeah, they could be, because intelligence comes from our brain, and if there's this materialistic brain up there, then we might be able to simulate it with a computer. My opinion has changed since then. I now believe that it is impossible to duplicate humanity with artificial intelligence, and I'm at odds with some people on that, but I think I have some very good, defensible ideas. So right now I believe that the mind is different from the brain and that we are fearfully and wonderfully made, as it says somewhere. Those who try to duplicate humanity using artificial intelligence will never succeed. They don't succeed now, and I think there are good reasons why they won't succeed in the future.

Jeryn: Yes, of course. So this topic is very interesting. And what do you think was the most fascinating thing you have discovered?

Dr. Marks: Well, I don't know if I've discovered it, but I'm certainly a proponent of it, and that is that artificial intelligence will never be creative. Artificial intelligence has yet to prove itself to be creative. Now, we have to be careful here, because we have to define what we mean by creative, but with the proper definition, no, it's never been creative. Artificial intelligence doesn't understand what it does, and it will never be sentient. Let me give you an example. If you bite into a lemon, you have an experience: you have the aroma of the lemon, you have the taste of the sourness, you kind of wince. Now try to communicate that experience to a person who can see, but who has had no sense of smell and no sense of taste since birth. You can explain that it was sour, but he has no idea what sour is. You can explain the chemistry that is happening; you can explain the Newtonian mechanics of the bite. But communicating to him the literal experience you had in biting that lemon is not possible. That's a human attribute called qualia. And if you can't explain your experience of biting a lemon to another human being, how are you going to program a computer to duplicate that experience? You can't. So all of the human experiential things which lie under the umbrella of qualia, such as pain, and I would assume emotions, you know, depression, elation, are things which are never going to be explained to or programmed into a computer. These are things which are above and beyond the capability of artificial intelligence. And as I mentioned, creativity and understanding are likewise beyond the reach of artificial intelligence.
So I think these are probably some of the most interesting conclusions I've reached over the years in studying this. Also, so far, computers have no common sense; AI has no common sense. There's an old story about Fred Flintstone getting his fingers glued in a bowling ball. Barney Rubble goes and gets a hammer and brings it back, because Fred can't get his fingers out of the bowling ball. And Fred said, okay, Barney, when I nod my head, hit it. Now, we know immediately that Fred Flintstone was referring to the bowling ball, but of course the joke is that it could also refer to Fred's head. We know immediately as people, but current artificial intelligence doesn't know which one it refers to, and it just might hit Fred on the head with the hammer, as Barney did in the cartoon. That was for comic reasons, but AI has no common sense either.

Jeryn: For sure. So AI is definitely becoming a very prominent topic in our world, as you mentioned, but some students might not know much about it. So how would you go about explaining AI to students who might be listening right now that don't have much knowledge on this topic?

Dr. Marks: I think the main aspect of artificial intelligence is learning by experience. The idea is like Pavlov's dogs: you show Pavlov's dogs food and you ring a bell, and they begin to associate the ringing of the bell with the food. You can also compare it to training a dog. You want your dog to bring you the newspaper. Well, it brings you a box of crackers, and you whack it on the head with the newspaper and say no. Then it goes back and brings you, I don't know, a carton of milk, and you whack it on the head again. When it brings you the paper, you reward it. That's the way you train a dog, and that's a lot like the way artificial neural networks are trained. You can train a neural network to differentiate between, for example, cats and dogs. You show the neural network a picture of a cat and say, this is a cat. The neural network gets it wrong. You whack it in some sense, but it's an algorithmic whacking that lets it adjust itself, so that next time it sees that picture, it knows better how to classify it. Then you show it a picture of a dog, and a cat, and a dog, and a cat, and pretty soon it's able to differentiate between the cats and dogs. So it's an iterative process. Sometimes this learning takes a heck of a long time, but once it's trained, the neural network has the capability of differentiating cats and dogs. I would say this idea of neural network learning is one of the main focuses of current artificial intelligence research, as is something else called reinforcement learning; those are what I would say are the two big driving areas. Related to that are data mining and natural language processing: how does one go into a big database and mine the information in it in order to get some sort of answer?
We saw this a long time ago with IBM Watson participating in the quiz show Jeopardy! Watson was able to beat the world-champion Jeopardy! players because it had access to a great big room full of data, probably including all of Wikipedia plus some, and it was able to access this data in response to queries. So that's another aspect; there are a number of different aspects. What is also interesting is the application of artificial intelligence. I submit that we have artificial intelligence around us all the time, and we are numbed by familiarity. We are so used to this artificial intelligence that we are no longer excited about it, and I would say this is especially true of younger generations. Things like Google Maps, for example, still blow my mind, as does Spotify, as does Alexa. All of these things are just incredible, although Alexa, with its automatic speech recognition, can still do a terrible job. Have you ever used Alexa? It can be terrible. This morning, I asked it to play a song by an old blues singer called Howlin' Wolf, and it brought up this stupid band that was off-key. I kept yelling at it to play, no, Howlin' Wolf, the blues player, and it kept coming up with the same thing. So they still have a long way to go. But nevertheless, I'm still in awe of things such as Alexa, and we're going to see more jaw-dropping things coming from artificial intelligence in the future.
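[Editor's note: the "algorithmic whacking" Dr. Marks describes is, in standard terms, error-driven weight adjustment. A minimal sketch of the idea in plain Python follows; the two-number "features", labels, and learning rate are invented for illustration (this is a single perceptron, not the deep networks real classifiers use):]

```python
# Minimal perceptron: error-driven learning, the "algorithmic whacking".
# Each training example is (features, label). The features are made up
# for illustration, e.g. (ear pointiness, snout length); label 1 = dog, 0 = cat.

def train(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, nudged after every mistake
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = label - pred  # 0 if correct, +1 or -1 if wrong
            # The "whack": adjust weights toward the correct answer.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

# Invented toy data: dogs score higher on the second feature.
data = [((0.9, 0.2), 0), ((0.8, 0.1), 0),   # cats
        ((0.2, 0.9), 1), ((0.1, 0.8), 1)]   # dogs
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 1, 1]
```

After a few passes of being "whacked" on its mistakes, the model classifies all four examples correctly; showing it cat, dog, cat, dog repeatedly is exactly the iterative loop described above.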

Jeryn: Some people who might have not heard of AI before tend to get worried due to the fact that AI is such a grand topic. Do you think there are any dangers of AI?

Dr. Marks: Well, I'll tell you what the biggest danger is. A lot of people think the danger of AI is going to be like in the movie The Terminator, where AI takes over the world and begins zapping people, or The Matrix, where you wake up in a big vat of goo and find out you've been living in a world of virtual reality. No, that's never going to happen. AI is never going to do that, because, again, AI can never be creative, it can never understand, it can never be sentient, and it can never set its own goals; that's all determined by the computer program. So that isn't going to happen; put those worries aside. What I think the biggest danger is, is AI doing things it wasn't intended to do. We mentioned Alexa: if you ask it to play an old Howlin' Wolf song and it plays some stupid band, well, that's a mistake you can live with. But there was a self-driving car, I believe in 2018, sponsored by Uber, that killed a pedestrian. That was an example of an unintended consequence of artificial intelligence, one not intended by the designers. Mitigating this sort of thing is the responsibility of the programmers and the testers of the artificial intelligence. Once you develop artificial intelligence, you have to test it; you have to make sure it works in all the different scenarios it's supposed to work in. And for potentially dangerous, complex artificial intelligence like self-driving cars, this becomes very difficult, because it turns out these unintended contingencies grow exponentially with a linear increase in the complexity of the system.
And so as we build more and more complex artificial intelligence systems, it's going to become more and more difficult to assure that an artificial intelligence system works as it was intended to.

Jeryn: I feel like your research definitely addresses that as well. And you were awarded multiple awards and listed as one of the most influential scientists in the world for your work, having a very accomplished career. What do you think you did differently compared to your peers that helped you become who you are today?

Dr. Marks: You know, this is funny, because when I made that list, I was called by the newspaper, and I resurfaced an old Jack Benny joke. I said, you know, I don't deserve to be on this list, but I have lower back pain, and I don't deserve that either. So I thought, yeah, I could go on that list. So yes, I did make the list. What was your question about it?

Jeryn: So what do you think you did differently compared to your peers that helped you become who you are today?

Dr. Marks: Well, I think one of the things is: don't go with the flow; don't go with the consensus. I got into trouble at Baylor, for example, for questioning Darwinism, of all simple things, simply pointing out that there were some inconsistencies in the theory, and this cost me big. My website was shut down, and I got quite a bit of notoriety from it. I was actually in a movie, Expelled: No Intelligence Allowed, starring Ben Stein, which featured a bunch of people like myself who had gone against the grain of ongoing science. Unfortunately, consensus is not science. The great strides in science come from questioning, questioning, questioning. If you come up with something that is contrary to an established theory, and you are bold enough to state it even though it's going to irritate some of the people in the status quo, I think that is something which needs to be done. It's something I've done, at a personal cost sometimes, but it needs to be done.

Dr. Marks: And I think all the great breakthroughs in science and other places are made by people that are bold enough to point out the errors.

Jeryn: For sure. And for our final question, what kind of advice would you like to give someone who is pursuing research on a topic this big and undetermined like AI?

Dr. Marks: Well, this is very interesting. It depends on your goal. If your goal is to get a graduate degree, I think it's a great field. If you come into graduate school, or even undergraduate, but graduate school specifically, where what you want to do is research, it's always easier to climb a very small hill in order to make your contribution. If you get a Ph.D., for example, you need to add to the knowledge of mankind, and in order to do that, you have to climb a mountain and empty your pail of contribution on top to make the mountain taller. If you get involved in an area which is not well developed, and I would say there are lots of things in artificial intelligence which need to be done, the hill you have to climb is not as tall, and it's easier to make a contribution that way. I would also caution people who want to go into this field: make sure you're called to it. I'm a nerd, okay? God gifted me with being a nerd. I enjoy research, I enjoy mathematics, I enjoy being in the academy, but that is my calling, and I have seen people whose callings are elsewhere. My wife, for example, doesn't have the appreciation for it that I do, and there are things she appreciates that I don't. She can see a beautiful field of roses and say, oh my goodness, I see creation, I see God here. I don't; I see a bunch of roses. I get excited about good mathematics and new technologies and new little gadgets. So that should be at the core of who you are before you develop a career in a specific area.

Jeryn: That was a wonderful piece of advice to give to the students who are listening. And that brings us to the end of the interview. Thank you again for joining me today. Make sure to check out our podcasts, available on global platforms.

Dr. Marks: Well, Jeryn, great. I hope we meet sometime. You sound like a fascinating young lady. You take care now.

