Dr. Bill.TV #459 – Audio – The Cut the Cord, Scan Your Channels Edition!

A quick update on the FCC-required ‘re-pack’ of TV station frequencies: if you have cut the cord, you really SHOULD re-scan your local OTA (Over-the-Air) channels NOW!

Links that pertain to this Netcast:

TechPodcasts Network

International Association of Internet Broadcasters

Blubrry Network

Dr. Bill Bailey.NET

BitChute Referral

www.DrBill.TV/VPN


Start the Video Netcast in the Blubrry Video Player above by
clicking on the “Play” Button in the center of the screen.

(Click on the buttons below to Stream the Netcast in your “format of choice”)

Streaming MP3 Audio

Streaming Ogg Audio

Download M4V Download WebM Download MP3 Download Ogg
(Right-Click on any link above, and select “Save As…” to save the Netcast on your PC.)

You may also watch the Dr. Bill.TV Show on these services!

Dr. Bill.TV on YouTube Dr. Bill.TV on Vimeo


Dr. Bill.TV #458 – Video – The Tech Gone Bad Edition!

Frustrating Voicemail Scam, Google’s control of search, the case for the DuckDuckGo Search Engine, use Social Media, don’t let it use YOU! Google’s search AI algorithm can cause problems; is Google “shaping” what is considered “fair” online?

Links that pertain to this Netcast:

TechPodcasts Network

International Association of Internet Broadcasters

Blubrry Network

Dr. Bill Bailey.NET

BitChute Referral

www.DrBill.TV/VPN


Start the Video Netcast in the Blubrry Video Player above by
clicking on the “Play” Button in the center of the screen.

(Click on the buttons below to Stream the Netcast in your “format of choice”)

Streaming MP3 Audio

Streaming Ogg Audio

Download M4V Download WebM Download MP3 Download Ogg
(Right-Click on any link above, and select “Save As…” to save the Netcast on your PC.)

You may also watch the Dr. Bill.TV Show on these services!

Dr. Bill.TV on YouTube Dr. Bill.TV on Vimeo


Dr. Bill.TV #458 – Audio – The Tech Gone Bad Edition!

Frustrating Voicemail Scam, Google’s control of search, the case for the DuckDuckGo Search Engine, use Social Media, don’t let it use YOU! Google’s search AI algorithm can cause problems; is Google “shaping” what is considered “fair” online?

Links that pertain to this Netcast:

TechPodcasts Network

International Association of Internet Broadcasters

Blubrry Network

Dr. Bill Bailey.NET

BitChute Referral

www.DrBill.TV/VPN


Start the Video Netcast in the Blubrry Video Player above by
clicking on the “Play” Button in the center of the screen.

(Click on the buttons below to Stream the Netcast in your “format of choice”)

Streaming MP3 Audio

Streaming Ogg Audio

Download M4V Download WebM Download MP3 Download Ogg
(Right-Click on any link above, and select “Save As…” to save the Netcast on your PC.)

You may also watch the Dr. Bill.TV Show on these services!

Dr. Bill.TV on YouTube Dr. Bill.TV on Vimeo


Is Google “Shaping” What Is Considered “Fair”?

Insider Blows Whistle and Exec Reveals Google Plan to Prevent “Trump Situation” in 2020 on Hidden Cam

This video is on BitChute, because it can’t be on Google’s service, YouTube, obviously. The idea is that Google, inside the walls of its offices, calls something “fair” ONLY if it matches its OWN political opinion. Whether you support President Trump or not, you should find this VERY disturbing. Next time it may be YOUR candidate, or cause, that is not “fair” in the opinion of those in power.

The video comes from a very “right”-leaning organization called “Project Veritas” (“veritas” is Latin for “truth”). They use hidden-camera videos to expose the “internals” of many organizations, like Planned Parenthood. Now, they are exposing Google and its alleged left-leaning political agenda.

Again, you may be “left-leaning” yourself, and believe this is all great! But the shoe can always end up on the other foot! This is why we must be FREE to express all opinions. Freedom of speech is just that: freedom! Then, you make up your OWN mind about what is correct, and truthful.

Google’s AI Can Cause Problems

“AI,” or “Artificial Intelligence,” drives Google products, like YouTube, but it has issues! Are “highly engaged,” and therefore opinion-driven, hyper-users “shaping” what content gets recommended?

The Toxic Potential of YouTube’s Feedback Loop

Wired – By: Guillaume Chaslot – “From 2010 to 2011, I worked on YouTube’s artificial intelligence recommendation engine – the algorithm that directs what you see next based on your previous viewing habits and searches. One of my main tasks was to increase the amount of time people spent on YouTube. At the time, this pursuit seemed harmless. But nearly a decade later, I can see that our work had unintended – but not unpredictable – consequences. In some cases, the AI went terribly wrong.

Artificial intelligence controls a large part of how we consume information today. In YouTube’s case, users spend 700,000,000 hours each day watching videos recommended by the algorithm. Likewise, the recommendation engine for Facebook’s news feed drives around 950,000,000 hours of watch time per day.

In February, a YouTube user named Matt Watson found that the site’s recommendation algorithm was making it easier for pedophiles to connect and share child porn in the comments sections of certain videos. The discovery was horrifying for numerous reasons. Not only was YouTube monetizing these videos, its recommendation algorithm was actively pushing thousands of users toward suggestive videos of children.

When the news broke, Disney and Nestlé pulled their ads off the platform. YouTube removed thousands of videos and blocked commenting capabilities on many more.

Unfortunately, this wasn’t the first scandal to strike YouTube in recent years. The platform has promoted terrorist content, foreign state-sponsored propaganda, extreme hatred, softcore zoophilia, inappropriate kids content, and innumerable conspiracy theories.

Having worked on recommendation engines, I could have predicted that the AI would deliberately promote the harmful videos behind each of these scandals. How? By looking at the engagement metrics.

Anatomy of an AI Disaster

Using recommendation algorithms, YouTube’s AI is designed to increase the time that people spend online. Those algorithms track and measure the previous viewing habits of the user – and users like them – to find and recommend other videos that they will engage with.

In the case of the pedophile scandal, YouTube’s AI was actively recommending suggestive videos of children to users who were most likely to engage with those videos. The stronger the AI becomes – that is, the more data it has – the more efficient it will become at recommending specific user-targeted content.

Here’s where it gets dangerous: As the AI improves, it will be able to more precisely predict who is interested in this content; thus, it’s also less likely to recommend such content to those who aren’t. At that stage, problems with the algorithm become exponentially harder to notice, as content is unlikely to be flagged or reported. In the case of the pedophilia recommendation chain, YouTube should be grateful to the user who found and exposed it. Without him, the cycle could have continued for years.

But this incident is just a single example of a bigger issue.

How Hyper-Engaged Users Shape AI

Earlier this year, researchers at Google’s DeepMind examined the impact of recommender systems, such as those used by YouTube and other platforms. They concluded that ‘feedback loops’ in recommendation systems can give rise to ‘echo chambers’ and ‘filter bubbles,’ which can narrow a user’s content exposure and ultimately shift their worldview.

The model didn’t take into account how the recommendation system influences the kind of content that’s created. In the real world, AI, content creators, and users heavily influence one another. Because AI aims to maximize engagement, hyper-engaged users are seen as ‘models to be reproduced.’ AI algorithms will then favor the content of such users.

The feedback loop works like this: (1) People who spend more time on the platforms have a greater impact on recommendation systems. (2) The content they engage with will get more views/likes. (3) Content creators will notice and create more of it. (4) People will spend even more time on that content. That’s why it’s important to know who a platform’s hyper-engaged users are: They’re the ones we can examine in order to predict which direction the AI is tilting the world.

More generally, it’s important to examine the incentive structure underpinning the recommendation engine. The companies employing recommendation algorithms want users to engage with their platforms as much and as often as possible because it is in their business interests. It is sometimes in the interest of the user to stay on a platform as long as possible—when listening to music, for instance – but not always.

We know that misinformation, rumors, and salacious or divisive content drives significant engagement. Even if a user notices the deceptive nature of the content and flags it, that often happens only after they’ve engaged with it. By then, it’s too late; they have given a positive signal to the algorithm. Now that this content has been favored in some way, it gets boosted, which causes creators to upload more of it. Driven by AI algorithms incentivized to reinforce traits that are positive for engagement, more of that content filters into the recommendation systems. Moreover, as soon as the AI learns how it engaged one person, it can reproduce the same mechanism on thousands of users.

Even the best AI in the world – the systems written by resource-rich companies like YouTube and Facebook – can actively promote upsetting, false, and useless content in the pursuit of engagement. Users need to understand the basis of AI and view recommendation engines with caution. But such awareness should not fall solely on users.

In the past year, companies have become increasingly proactive: Both Facebook and YouTube announced they would start to detect and demote harmful content.

But if we want to avoid a future filled with divisiveness and disinformation, there’s much more work to be done. Users need to understand which AI algorithms are working for them, and which are working against them.”
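
To make the four-step feedback loop above concrete, here is a toy simulation of my own devising. It is a minimal sketch in Python, NOT YouTube’s actual system, and every number in it (the engagement rates, the starting catalog) is hypothetical.

# Toy simulation of the engagement feedback loop described above.
# My own sketch, NOT YouTube's real algorithm; all numbers are hypothetical.

engagement = {"divisive": 0.9, "benign": 0.5}  # assumed watch probability per topic
supply = {"divisive": 10, "benign": 90}        # videos available per topic

for step in range(5):
    # Steps (1)+(2): the recommender favors what gets watched, so weight each
    # topic by how much of it exists and how strongly it engages heavy users.
    weights = {topic: supply[topic] * engagement[topic] for topic in supply}
    total = sum(weights.values())
    views = {topic: weights[topic] / total for topic in weights}

    # Step (3): creators notice what gets views and upload more of it.
    for topic in supply:
        supply[topic] += round(100 * views[topic] * engagement[topic])

    # Step (4): the catalog tilts toward the high-engagement content.
    share = supply["divisive"] / sum(supply.values())
    print(f"step {step}: divisive share of catalog = {share:.1%}")

Run it and the “divisive” share of the catalog grows every single step, even though it started at only 10%. That drift, multiplied across millions of users, is the tilt Chaslot is warning about.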

Using Social Media, Not Letting It Use You!

Have you ever thought about how you use social media? Do you have a daily habit of checking Facebook, or Twitter, or Instagram? Is this necessarily bad in and of itself? It doesn’t have to be, as long as it doesn’t become an addiction! You know that you are addicted if you have a “gnawing feeling” that you are missing out if you don’t check your social media accounts several times every day! Have there ever been several days in which you have not checked your social media? Maybe you got too busy, you had things going on in your life, or perhaps you just didn’t feel like you had the time to sit down and check your social media accounts. If so, you may be the exception and not the rule! It is surprising how many people can’t let a day go by without checking their social media!

This can be a form of addiction. Now I’m not necessarily saying that you will go into some kind of sweating, shaking withdrawal if you don’t check your social media accounts. But just that little “gnawing” in the back of your mind may be an indication that addiction is a possibility! The bottom line is, I want to use social media, but not let it use me!

Another problem to consider: if you get all your information, or even a large portion of it, from social media, are you thinking about it critically? I have mentioned in the last several articles the need for critical thinking. It is not enough to know that we need to think about what we see, what we read, and what we hear through media. We need to always stop and ask ourselves several questions. The first key question is: “Who wrote what I am reading?” Not who sent it. Not who posted it. Because that could have been a close friend, someone you know and trust. Next, we need to ask, “Where did they get it?” Are they just mindlessly forwarding something that briefly caught their attention, with no idea as to the source of the information?

As an example, let’s say a close, trusted friend posts an article that indicates that a celebrity has died. That celebrity is one of your favorite actors. You then post on your timeline how much you regret that actor’s death. Or, you simply share the article that your friend posted about the actor’s death. Then you find out that the actor in question is still alive! You feel kind of silly. That’s a fairly harmless example of the problem that we’re talking about. But what if the posts you read, even from trusted friends, concern an issue that is more important, or more critical? You could pass along information whose origin you do not know, and whose contents haven’t been confirmed. That piece of information could be read by someone, influence them to think a certain way, to perceive the world a certain way, and may even drive them to some form of action that you never thought of, or intended!

Finally, ask yourself, “Is someone trying to influence me with this information?” Don’t be a “lemming” that jumps off into the sea because all the other lemmings are jumping! Think for yourself!

You can see the process that is troubling in this scenario. Blindly posting, or sharing, information can create situations that have dire consequences! This is a sad fact, but one that we need to take to heart. If you see something online, no matter the source, check out the facts for yourself. Don’t let someone influence you with a random post; check out the source, check out the motive, and find out the facts for yourself!

The Case for DuckDuckGo!

Many people who are concerned about their online security and privacy while doing Internet searches are switching to an alternative search engine called “DuckDuckGo,” located at https://DuckDuckGo.com. On their promotion page, they ask that users of the service spread the word about why their friends should use the DuckDuckGo search engine. They say that “Friends don’t let friends get tracked!” They also remind users to tell their friends that Google tracks you, and DuckDuckGo does not. Search should remain private and should not be targeted by advertisers. DuckDuckGo actively blocks Google’s hidden trackers, and Google trackers are present on 75% of the top million websites! They indicate that their unbiased results are outside of the “filter bubble” that is typified by Google.

DuckDuckGo is committed to unbiased search that’s never based on your search history, and they ask that you spread the word that we all should stand up for a pro-privacy business model in the field of Internet search. This would be a distinct alternative to Google’s “collect-it-all” business model! The gist of it is that no one else should own your data! It is your data and you need to protect it. This is the thinking behind DuckDuckGo. It is a privately held Internet company dedicated to empowering the user to take control over their personal information online without trade-offs.

Maybe it is time to consider this alternative to the all-powerful Google!
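
If you want to experiment with DuckDuckGo programmatically, they also publish a free “Instant Answer” API. The sketch below is a minimal example using only the Python standard library; it assumes the api.duckduckgo.com endpoint and the Heading/AbstractText/AbstractURL JSON fields as I understand their public documentation, so verify against https://duckduckgo.com/api before relying on it.

# Minimal sketch: query DuckDuckGo's Instant Answer API.
# Assumes the endpoint and JSON fields documented at https://duckduckgo.com/api.
import json
import urllib.parse
import urllib.request

def duck_instant_answer(query):
    """Return the Instant Answer JSON for a search query."""
    url = "https://api.duckduckgo.com/?" + urllib.parse.urlencode(
        {"q": query, "format": "json", "no_html": 1, "no_redirect": 1}
    )
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

answer = duck_instant_answer("filter bubble")
print(answer.get("Heading"))       # topic title, if DuckDuckGo has one
print(answer.get("AbstractText"))  # short summary text
print(answer.get("AbstractURL"))   # where the summary came from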

The Issue With Google’s Control of Search

As you know, my buddy that I used to work with, whom I call “The Other Computer Curmudgeon,” has been sending me all kinds of information about how Google is trying to take over the world! Now, of course, some people think he is a little crazy, but they think that about me as well, so it works out!

To that end, Politico magazine online ran a story in 2015 entitled “How Google Could Rig the 2016 Election.” Now, of course, this is old news, as we are well past 2016, but I think that what it talks about could in fact be used to sway future elections as well. The author, Robert Epstein, writes in this article that he had been directing research into Google and its ability to control opinions and beliefs based on its search algorithms. What you search for, and what results you get back, are of great importance in how you come to perceive an issue.

Google has the ability, perhaps more than any other company in history, to control, or shift, the voting preferences of undecided voters. Mr. Epstein indicates that, in his view, those voters would have virtually no knowledge that they are being manipulated by the search results they see.

He then goes on to point out that, because many elections are won by very small margins, this would give Google the power to flip upwards of 25% of national elections in countries worldwide. His example is that, in the United States, half of our past presidential elections have been won by margins under 7.6%.
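
Epstein’s point is easy to check with back-of-the-envelope arithmetic. The Python sketch below uses entirely hypothetical numbers (46.5% of voters committed to candidate A, 47% committed to candidate B, and 6.5% undecided) to show how nudging the undecided from a 50/50 break to a 60/40 break flips the winner:

# Back-of-the-envelope illustration; ALL numbers here are hypothetical.
decided_a = 0.465  # voters committed to candidate A
decided_b = 0.470  # voters committed to candidate B
undecided = 1.0 - decided_a - decided_b  # 6.5% undecided

def outcome(share_of_undecided_to_a):
    """Final vote shares if the undecided break toward A at the given rate."""
    a = decided_a + undecided * share_of_undecided_to_a
    b = decided_b + undecided * (1.0 - share_of_undecided_to_a)
    return a, b

for label, split in [("Neutral search results (50/50)", 0.50),
                     ("Biased search results (60/40)", 0.60)]:
    a, b = outcome(split)
    print(f"{label}: A = {a:.2%}, B = {b:.2%} -> winner: {'A' if a > b else 'B'}")

With neutral results, A loses, 49.75% to 50.25%; with the biased nudge, A wins, 50.40% to 49.60%. The entire swing is only 0.65 points of the final tally, far inside the thin margins mentioned above, and the voters involved would never know it happened.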

The fact that this could be done without the knowledge of the people doing searches using the Google search engine, to me, is what is most insidious in this scenario. Because our school systems have done a very poor job of teaching people to think critically for themselves, most people tend to just surf the web, do searches in the search engines, and assume that what they’re finding online IS accurate information. They don’t question the source, or the motivation, of the people posting such information on the Internet.

Because of this, there is a real problem that people will be swayed by what they find on the Google search engine, Facebook, Twitter, and even Instagram and Snapchat, influenced without even knowing that they have been influenced. There’s something about seeing an article on a computer screen that tends to make people think it must be true! This is a sad fact, but all you have to do is critically watch a pharmaceutical advertisement on TV, noting what they are saying as well as what they are NOT saying, to get an understanding of the drug they’re trying to sell you. Because make NO mistake, the reason they’re running that advertisement IS to sell you their latest drug! That’s just one example of the kind of influence that I’m talking about.

This is even more critical in a venue where people assume that the people providing a search engine have no “skin in the game” when it comes to providing a search response. That is, in fact, the way it should be, but unfortunately it is not the way it is! Google does have a political agenda. That’s why, as my buddy “The Other Computer Curmudgeon” says, we need to be very careful of Google, and what it represents, in our searches, and how it impacts the direction of our thinking. The key here is to be a critical thinker! Think about what the person is saying, how they’re saying it, what their motivation is in saying it, and how they are trying to influence you through what they’re writing, or presenting to you. If you don’t see opposing views, you could be swayed!

Frustrating Voicemail Scam

I just posted this on Facebook:

“As most of you know, I run a tech show on YouTube called Dr. Bill.TV | The Computer Curmudgeon. I’m going to be talking about this on my show, but I wanted to put this on Facebook as well, because it is SO frustrating! I have been hit twice today with a new phone scam that I have just been made aware of. Perhaps you’ve already had this happen, and if you haven’t, it probably will soon! It begins on your cell phone with a voicemail message. The key here is that the phone NEVER rings. You are just notified that you have a voicemail waiting. When you listen to the voicemail, you are told some story (the two I heard today were different), and you are asked to call a telephone number, at which point they will probably scam you to no end! I, of course, did not call the numbers that I was given and told to call. The correct response is simply to delete the voicemail and go on about your business.

What is so frustrating here is that no call-blocking software blocks this yet. I have several such apps on my phone, and none of them stopped it. I understand that there is legislation being considered to stop this practice.

This is a case of tech development gone bad. They are developing nuisance tech to try to reach us with their scams, and this one is particularly insidious! More to come on my show about tech gone bad this weekend!”

Dr. Bill.TV #457 – Video – The Google Gets Worse Edition!

Roku/FireTV, the ‘big dogs’ in Cord Cutting, Google listens in on Google Assistant, Orville Season 3 on Hulu, VirtualBox adds UEFI Secure Boot for Linux, Linux loses the Floppy, Elon Musk implants links from the brain to computers, Cord Cutting Questions!

Links that pertain to this Netcast:

TechPodcasts Network

International Association of Internet Broadcasters

Blubrry Network

Dr. Bill Bailey.NET

BitChute Referral

www.DrBill.TV/VPN


Start the Video Netcast in the Blubrry Video Player above by
clicking on the “Play” Button in the center of the screen.

(Click on the buttons below to Stream the Netcast in your “format of choice”)

Streaming MP3 Audio

Streaming Ogg Audio

Download M4V Download WebM Download MP3 Download Ogg
(Right-Click on any link above, and select “Save As…” to save the Netcast on your PC.)

You may also watch the Dr. Bill.TV Show on these services!

Dr. Bill.TV on YouTube Dr. Bill.TV on Vimeo

