Clicking "OK" on an alert in Selenium WebDriver (alert is triggered)

If you are interested in clicking "OK" on an alert in Selenium WebDriver and in the alert being triggered, this article is one you should not miss. We will go through the details of clicking "OK" on an alert in Selenium WebDriver and take a closer look at the triggered alert. In addition, there are practical tips on Java Selenium WebDriver: modifying the navigator.webdriver flag to prevent Selenium detection; Selenium WebDriver "not callable" error when importing selenium but works without importing selenium; Selenium WebDriver - Java - clicking a button; and Selenium WebDriver - clicking a hidden element.


Clicking "OK" on an alert in Selenium WebDriver (alert is triggered)

OK, so I know there are plenty of other answers about WebDriver alerts, and I have gone through them carefully, but I think my case is different. By the time I click the Submit button I have already switched into 3 frames, and then the alert appears, so I try to switch back to the default content and then click the alert using try/catch and alert.accept(), but the alert still is not clicked. The code is below. Thanks in advance for your help :)

public class BookAHoliday {

    public FirstPage completeHolidayFormAndSubmit(String firstDate, String lastDate) {
        sleepsAreBad();
        driver.switchTo().frame("ContainerFrame");
        driver.switchTo().frame("iframeCommunityContainer");
        driver.switchTo().frame("FORMCONTAINER");
        fluentWait(By.id("StartDate_txtInput"));
        firstDayOfLeaveInput.sendKeys(firstDate);
        sleepsAreBad();
        lastDayofLeaveInput.sendKeys(lastDate);
        try {
            submitButton.click();
        } catch (UnhandledAlertException f) {
            try {
                sleepsAreBad();
                driver.switchTo().defaultContent();
                Alert alert = driver.switchTo().alert();
                String alertText = alert.getText();
                System.out.println("Alert data: " + alertText);
                alert.accept();
            } catch (NoAlertPresentException e) {
                e.printStackTrace();
            }
        }
        sleepsAreBad();
        return PageFactory.initElements(driver, FirstPage.class);
    }

    private void sleepsAreBad() {
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

public class BaseTest {

    public static WebDriver driver;
    static String driverPath = "C:\\";

    @BeforeClass
    public static void setUp() {
        System.out.println("****************");
        System.out.println("launching Browser");
        System.out.println("****************");
        // Browser selection
        // Firefox
        DesiredCapabilities dc = new DesiredCapabilities();
        dc.setCapability(CapabilityType.UNEXPECTED_ALERT_BEHAVIOUR, UnexpectedAlertBehaviour.IGNORE);
        driver = new FirefoxDriver(dc);
        driver.get(URL);
    }

    @AfterClass
    public static void tearDown() {
        if (driver != null) {
            System.out.println("Closing browser");
            driver.quit();
        }
    }
}

public class Bookings extends BaseTest {

    @Test(description = "Holiday booking")
    public void CD01() {
        FirstPage firstPage = PageFactory.initElements(driver, FirstPage.class);
        firstPage
                .logIn("username", "password")
                .clickHolidayLink()
                .completeHolidayFormAndSubmit("12/05/2016", "15/05/2016");
    }
}

[Screenshot: alert box - "This is the alert box"]

Answer 1

After landing in the UnhandledAlertException catch block, try this:

WebDriverWait wait = new WebDriverWait(driver, 3000);
wait.until(ExpectedConditions.alertIsPresent());
Alert alert = driver.switchTo().alert();
alert.accept();

It may help you… :)

Java Selenium WebDriver: modifying the navigator.webdriver flag to prevent Selenium detection

I am trying to automate a very basic task on a website using Selenium and Chrome, but somehow the website detects that Chrome is driven by Selenium and blocks every request. I suspect the website relies on an exposed DOM variable like the one described in https://stackoverflow.com/a/41904453/648236 to detect Selenium-driven browsers.

My question is: is there a way to make the navigator.webdriver flag false? I am willing to modify and recompile the Selenium source after making changes, but I cannot find the NavigatorAutomationInformation source anywhere in the repository https://github.com/SeleniumHQ/selenium

Any help is much appreciated.

PS: I also tried the following, from https://w3c.github.io/webdriver/#interface

Object.defineProperty(navigator, 'webdriver', {
    get: () => false,
});

But it only updates the property after the initial page load. I think the website detects the variable before my script is executed.

Answer 1

With the current implementation, an ideal way to access a web page without getting detected is to use the ChromeOptions() class to add a couple of arguments that:

exclude the collection of enable-automation switches
turn off useAutomationExtension
both passed through an instance of ChromeOptions

Java example:

System.setProperty("webdriver.chrome.driver", "C:\\Utility\\BrowserDrivers\\chromedriver.exe");
ChromeOptions options = new ChromeOptions();
options.setExperimentalOption("excludeSwitches", Collections.singletonList("enable-automation"));
options.setExperimentalOption("useAutomationExtension", false);
WebDriver driver = new ChromeDriver(options);
driver.get("https://www.google.com/");

Python example:

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
driver = webdriver.Chrome(options=options, executable_path=r'C:\path\to\chromedriver.exe')
driver.get("https://www.google.com/")
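Regarding the PS in the question above (the Object.defineProperty override only taking effect after the initial page load): an approach that is sometimes combined with the options above is to register that same script through the Chrome DevTools Protocol so that it runs before any page script on every new document. The following is only a rough sketch, not part of the original answer; it assumes Chrome/chromedriver, a Selenium release whose Python bindings expose execute_cdp_cmd (Selenium 4, or a late 3.x version), and that chromedriver is on the PATH:

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
driver = webdriver.Chrome(options=options)  # assumes chromedriver is on the PATH

# Register the override so it is evaluated before any script on every new
# document, instead of only after the first page has already loaded.
driver.execute_cdp_cmd(
    "Page.addScriptToEvaluateOnNewDocument",
    {"source": "Object.defineProperty(navigator, 'webdriver', {get: () => false})"},
)

driver.get("https://www.google.com/")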

Selenium WebDriver "not callable" error when importing selenium but works without importing selenium

How do I resolve the Selenium WebDriver "not callable" error that appears when importing selenium but not otherwise?

I am trying to scrape some LinkedIn profiles, but the code below gives me an error:

Error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-16-b6cfafdd5b52> in <module>
     25     #sending our driver as the driver to be used by srape_linkedin
     26     #you can also create driver options and pass it as an argument
---> 27     ps = ProfileScraper(cookie=myLI_AT_Key,scroll_increment=random.randint(10,50),scroll_pause=0.8 + random.uniform(0.8,1),driver=my_driver)  #changed name,default driver and scroll_pause time and scroll_increment made a little random
     28     print('Currently scraping: ',link,'Time: ',datetime.now())
     29     profile = ps.scrape(url=link)       #changed name

~\Anaconda3\lib\site-packages\scrape_linkedin\Scraper.py in __init__(self,cookie,scraperInstance,driver,driver_options,scroll_pause,scroll_increment,timeout)
     37 
     38         self.was_passed_instance = False
---> 39         self.driver = driver(**driver_options)
     40         self.scroll_pause = scroll_pause
     41         self.scroll_increment = scroll_increment

TypeError: 'WebDriver' object is not callable

Code:

from datetime import datetime
from scrape_linkedin import ProfileScraper
import random                       # new import made
from selenium import webdriver      # new import made
import pandas as pd
import json
import os
import re
import time

os.chdir("C:\\Users\\MyUser\\DropBox\\linkedInScrapper\\")

my_profile_list = ['https://www.linkedin.com/in/williamhgates/','https://www.linkedin.com/in/christinelagarde/','https://www.linkedin.com/in/ursula-von-der-leyen/']

myLI_AT_Key = MyKey # you need to obtain one from Linkedin using these steps:

# To get LI_AT key
# Navigate to www.linkedin.com and log in
# Open browser developer tools (Ctrl-Shift-I or right click -> inspect element)
# Select the appropriate tab for your browser (Application on Chrome, Storage on Firefox)
# Click the Cookies dropdown on the left-hand menu, and select the www.linkedin.com option
# Find and copy the li_at value

for link in my_profile_list:

    #my_driver = webdriver.Chrome()  # if you don't have Chromedriver in the environment path then use the next line instead of this
    my_driver = webdriver.Firefox(executable_path=r'C:\Users\MyUser\DropBox\linkedInScrapper\geckodriver.exe')
    #my_driver = webdriver.Chrome(executable_path=r'C:\Users\MyUser\DropBox\linkedInScrapper\chromedriver.exe')
    # sending our driver as the driver to be used by scrape_linkedin
    # you can also create driver options and pass it as an argument
    ps = ProfileScraper(cookie=myLI_AT_Key, scroll_increment=random.randint(10,50), scroll_pause=0.8 + random.uniform(0.8,1), driver=my_driver)  # changed name, default driver and scroll_pause time and scroll_increment made a little random
    print('Currently scraping: ', link, 'Time: ', datetime.now())
    profile = ps.scrape(url=link)       # changed name
    dataJSON = profile.to_dict()

    profileName = re.sub('https://www.linkedin.com/in/', '', link)
    profileName = profileName.replace("?originalSubdomain=es","")
    profileName = profileName.replace("?originalSubdomain=pe","")
    profileName = profileName.replace("?locale=en_US","")
    profileName = profileName.replace("?locale=es_ES","")
    profileName = profileName.replace("?originalSubdomain=uk","")
    profileName = profileName.replace("/","")

    with open(os.path.join(os.getcwd(), 'ScrapedLinkedInprofiles', profileName + '.json'), 'w') as json_file:
        json.dump(dataJSON, json_file)
        time.sleep(10 + random.randint(0,5))    # added randomness to the sleep time
    # this will close your browser at the end of every iteration
    my_driver.quit()

print('The first observation scraped was:', my_profile_list[0:])
print('The last observation scraped was:', my_profile_list[-1:])
print('END')

I have tried many different ways to get webdriver.Chrome() to work, but with no luck. I have tried both Chrome (chromedriver) and Firefox (geckodriver) and tried loading the selenium package in several different ways, but I keep getting the error TypeError: 'WebDriver' object is not callable.

My original code below still works (i.e. it opens a Google Chrome browser and goes to each profile in my_profiles_list), but I want to use the code above.

from datetime import datetime
from scrape_linkedin import ProfileScraper
import pandas as pd
import json
import os
import re
import time

my_profile_list = ['https://www.linkedin.com/in/williamhgates/','https://www.linkedin.com/in/ursula-von-der-leyen/']
# To get LI_AT key
# Navigate to www.linkedin.com and log in
# Open browser developer tools (Ctrl-Shift-I or right click -> inspect element)
# Select the appropriate tab for your browser (Application on Chrome,Storage on Firefox)
# Click the Cookies dropdown on the left-hand menu,and select the www.linkedin.com option
# Find and copy the li_at value
myLI_AT_Key = 'INSERT LI_AT Key'
with ProfileScraper(cookie=myLI_AT_Key,scroll_increment = 50,scroll_pause = 0.8) as scraper:
    for link in my_profile_list:
        print('Currently scraping: ', link, 'Time: ', datetime.now())
        profile = scraper.scrape(url=link)
        dataJSON = profile.to_dict()
        
        profileName = re.sub('https://www.linkedin.com/in/', '', link)
        profileName = profileName.replace("?originalSubdomain=es","")
        profileName = profileName.replace("?originalSubdomain=pe","")
        profileName = profileName.replace("?locale=en_US","")
        profileName = profileName.replace("?locale=es_ES","")
        profileName = profileName.replace("?originalSubdomain=uk","")
        profileName = profileName.replace("/","")
        
        with open(os.path.join(os.getcwd(), 'ScrapedLinkedInprofiles', profileName + '.json'), 'w') as json_file:
            json.dump(dataJSON,json_file)
            time.sleep(10)
            
print('The first observation scraped was:', my_profile_list[0:])
print('The last observation scraped was:', my_profile_list[-1:])
print('END')

Notes:

The code is slightly different because I asked a question about it on SO here, and @Ananth helped me with the solution.

I also know there are "similar" questions about selenium and chromedriver online and on SO, but after trying every suggested solution I still cannot get it to work (i.e. the common answer is that there is a typo in webdriver.Chrome()).

Workaround

No working solution for this problem has been found yet.
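That said, the traceback in the question offers a hint: Scraper.__init__ calls self.driver = driver(**driver_options), i.e. it tries to call whatever is passed as driver, which suggests the library expects a WebDriver class (plus separate driver_options) rather than an already-created instance. The sketch below is only an untested illustration of that idea, not a confirmed fix; whether ProfileScraper forwards driver_options exactly like this is an assumption based on the traceback alone, and the geckodriver path is taken from the question's own code:

import random

from scrape_linkedin import ProfileScraper
from selenium import webdriver

myLI_AT_Key = 'INSERT LI_AT Key'  # same li_at cookie value as in the question

# Pass the WebDriver *class* and its constructor arguments separately, since
# the library itself calls driver(**driver_options).
ps = ProfileScraper(
    cookie=myLI_AT_Key,
    scroll_increment=random.randint(10, 50),
    scroll_pause=0.8 + random.uniform(0.8, 1),
    driver=webdriver.Firefox,  # note: the class, not webdriver.Firefox(...)
    driver_options={'executable_path': r'C:\Users\MyUser\DropBox\linkedInScrapper\geckodriver.exe'},
)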


Selenium WebDriver - Java - clicking a button

I am trying to click some buttons, using the "Follow" button on Twitch as an example.

I used Selenium IDE to try to get the xpath of the button. The xpath I got is: //span[@id='ember637']/a/span

If I go to Firefox and copy the unique selector for the button, I get: .js-follow > span:nth-child(1)

I tried both in my Java program, but neither works. When I use the //span[... xpath, I get the following error:

"Unable to locate a node using //span[@id=''ember637'']/a/span"

Edit:

An example of a site with the button I want to click (the "Follow" button): http://www.twitch.tv/mradder89/profile/

The Selenium jar file I am using is "selenium-server-standalone-2.35.0.jar".

The error I get is:

"Exception in thread "main" org.openqa.selenium.NoSuchElementException: Unable to locate a node using //span[@id=''ember637'']/a/span"

Edit 2:

I downloaded the PhantomJSDriver exe file (phantomjs.exe) and am trying it out. It does not work… I am not getting the same error message as before (the "Unable to locate a node…" error).

Here is the code: http://pastebin.com/GzvubMZr

Answer 1

With PhantomJSDriver, try different locators. If there is an exception, post it; otherwise, post information about the element, such as its location, text, etc.

driver.findElement(By.xpath("//*[contains(@class, 'profile-actions')]//span[text()='Follow']")).click();
driver.findElement(By.cssSelector(".profile-actions .primary_button > span")).click();

Selenium WebDriver - clicking a hidden element

I am trying to automate the file upload feature in Google Drive.

The element used to pass the parameter is hidden with a height of 0px.

No user action makes this element visible, so I need a workaround to click on the invisible element.

<input type="file"multiple=""/>

The xpath of the above element is:

//*[@class='goog-menu goog-menu-vertical uploadmenu density-tiny']/input

I am using:

WebDriver.findElement(By.xpath(<xpath>)).sendKeys(<uploadFile>)

Exception:

org.openqa.selenium.ElementNotVisibleException
  • Element is not currently visible and so may not be interacted with.

I tried using JavascriptExecutor, but I could not find the exact syntax.

Answer 1

Try this:

WebElement elem = yourWebDriverInstance.findElement(By.xpath("//*[@class='goog-menu goog-menu-vertical uploadmenu density-tiny']/input"));
String js = "arguments[0].style.height='auto'; arguments[0].style.visibility='visible';";
((JavascriptExecutor) yourWebDriverInstance).executeScript(js, elem);

The snippet above changes the visibility of the file input control. You can then proceed with the usual steps for the file upload, for example:

elem.sendKeys("<LOCAL FILE PATH>");

Note that by changing the visibility of the input field you are tampering with the application under test. Injecting scripts to alter behavior is intrusive and is not recommended in testing.

This concludes the introduction to clicking "OK" on an alert in Selenium WebDriver (alert is triggered). Thank you for reading. If you would like to learn more about Java Selenium WebDriver: modifying the navigator.webdriver flag to prevent Selenium detection; Selenium WebDriver "not callable" error when importing selenium but works without importing selenium; Selenium WebDriver - Java - clicking a button; or Selenium WebDriver - clicking a hidden element, please search this site.
