Pulling Facebook Ads Insights Without Getting Suspended by the Marketing API

After a few days of digging, I finally came up with a script that pulls three years of Facebook Ads insights while staying under the Facebook API's rate limits.

First, we import the required libraries:

from facebookads.api import FacebookAdsApi
from facebookads.adobjects.adsinsights import AdsInsights
from facebookads.adobjects.adaccount import AdAccount
from facebookads.adobjects.business import Business
import datetime
import csv
import re
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from google.colab import files
import time

Note that after pulling the insights, I save them to Google Cloud Storage and then load them into a BigQuery table.
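The script further down references a GCS bucket handle (bucket), a BigQuery client (bq_client), and a dataset reference (dataset) without creating them. Here is a minimal setup sketch, assuming the google-cloud-storage and google-cloud-bigquery client libraries; the bucket name matches the gs://my-project/... URI used below, and the dataset name my_dataset is a placeholder:

from google.cloud import storage
from google.cloud import bigquery

# Placeholder project/bucket/dataset names - substitute your own.
gcs_client = storage.Client(project='my-project')
bucket = gcs_client.bucket('my-project')         # target of gs://my-project/fb_2/...
bq_client = bigquery.Client(project='my-project')
dataset = bq_client.dataset('my_dataset')        # hypothetical dataset name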

access_token = 'my-token'
ad_account_id = 'act_id'
app_secret = 'app_s****'
app_id = 'app_id****'

FacebookAdsApi.init(app_id, app_secret, access_token=access_token, api_version='v3.2')
account = AdAccount(ad_account_id)

Then, the following script calls the API and checks how much of the rate limit we have actually used:

import logging
import requests as rq

# Find the substring between two strings or characters.
def find_between(s, first, last):
    try:
        start = s.index(first) + len(first)
        end = s.index(last, start)
        return s[start:end]
    except ValueError:
        return ""

# Check how close you are to the FB rate limit.
def check_limit():
    check = rq.get('https://graph.facebook.com/v3.2/' + ad_account_id + '/insights?access_token=' + access_token)
    usage = float(find_between(check.headers['x-ad-account-usage'], ':', '}'))
    return usage
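find_between simply slices the number out of the x-ad-account-usage response header, which at this API version carries a small JSON payload. A hypothetical exchange, with an illustrative percentage:

# Hypothetical header value returned by the Graph API:
#   x-ad-account-usage: {"acc_id_util_pct":21.33}
usage = check_limit()
print(usage)  # -> 21.33, i.e. 21.33% of the account's rate limit is used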

Now, here is the entire script you can run to pull the last Y days of data!

Y = 1095  # number of days to pull; e.g. three years is roughly 1095 days

for x in range(1, Y):
    date_0 = datetime.datetime.now() - datetime.timedelta(days=x)
    date_ = date_0.strftime('%Y-%m-%d')
    date_compact = date_.replace('-', '')
    filename = 'fb_%s.csv' % date_compact
    filelocation = "./" + filename

    # Open or create new file
    try:
        csvfile = open(filelocation, 'w+')
    except IOError:
        print("Cannot open file.")

    # To keep track of rows added to file
    rows = 0

    try:
        # Create file writer and header row
        filewriter = csv.writer(csvfile, delimiter=',')
        filewriter.writerow(['date', 'ad_name', 'adset_id', 'adset_name',
                             'campaign_id', 'campaign_name', 'clicks',
                             'impressions', 'spend'])
    except Exception as err:
        print(err)

    # Pull ad-level insights for the single day date_
    ads = account.get_insights(
        params={'time_range': {'since': date_, 'until': date_}, 'level': 'ad'},
        fields=[AdsInsights.Field.ad_name,
                AdsInsights.Field.adset_id,
                AdsInsights.Field.adset_name,
                AdsInsights.Field.campaign_id,
                AdsInsights.Field.campaign_name,
                AdsInsights.Field.clicks,
                AdsInsights.Field.impressions,
                AdsInsights.Field.spend])

    for ad in ads:
        # Set default values in case the insight info is empty
        adsetid = ""
        adname = ""
        adsetname = ""
        campaignid = ""
        campaignname = ""
        clicks = ""
        impressions = ""
        spend = ""

        # Set values from insight data
        if 'adset_id' in ad:
            adsetid = ad[AdsInsights.Field.adset_id]
        if 'ad_name' in ad:
            adname = ad[AdsInsights.Field.ad_name]
        if 'adset_name' in ad:
            adsetname = ad[AdsInsights.Field.adset_name]
        if 'campaign_id' in ad:
            campaignid = ad[AdsInsights.Field.campaign_id]
        if 'campaign_name' in ad:
            campaignname = ad[AdsInsights.Field.campaign_name]
        if 'clicks' in ad:
            clicks = ad[AdsInsights.Field.clicks]
        if 'impressions' in ad:
            impressions = ad[AdsInsights.Field.impressions]
        if 'spend' in ad:
            spend = ad[AdsInsights.Field.spend]

        # Write all ad info to the file and count the row
        filewriter.writerow([date_, adname, adsetid, adsetname, campaignid,
                             campaignname, clicks, impressions, spend])
        rows += 1

    csvfile.close()

    # Print report
    print(str(rows) + " rows added to the file " + filename)
    print(check_limit(), '% of rate limit reached')

    # Write to GCS and load into BigQuery
    blob = bucket.blob('fb_2/fb_%s.csv' % date_compact)
    blob.upload_from_filename(filelocation)

    load_job_config = bigquery.LoadJobConfig()
    table_name = '0_fb_ad_stats_%s' % date_compact
    load_job_config.write_disposition = 'WRITE_TRUNCATE'
    load_job_config.skip_leading_rows = 1
    # The source format defaults to CSV, so the line below is optional.
    load_job_config.source_format = bigquery.SourceFormat.CSV
    load_job_config.field_delimiter = ','
    load_job_config.autodetect = True
    uri = 'gs://my-project/fb_2/fb_%s.csv' % date_compact
    load_job = bq_client.load_table_from_uri(
        uri,
        dataset.table(table_name),
        job_config=load_job_config)  # API request
    print('Starting job {}'.format(load_job.job_id))
    load_job.result()  # Waits for table load to complete.
    print('Job finished.')

    # Cool down before the next day's pull if usage is high
    if check_limit() >= 75:
        print('75% Rate Limit Reached. Cooling down for 225 seconds.')
        logging.debug('75% Rate Limit Reached. Cooling down for 225 seconds.')
        time.sleep(225)
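A possible refinement of the cooldown, not in the original script: scale the pause with the reported usage instead of always sleeping 225 seconds. A sketch, assuming check_limit() returns a percentage between 0 and 100:

def cool_down(threshold=75, max_sleep=300):
    # Sleep longer the further usage is above the threshold.
    usage = check_limit()
    if usage >= threshold:
        fraction = min((usage - threshold) / (100 - threshold), 1.0)
        time.sleep(30 + fraction * (max_sleep - 30))

At 75% usage this waits only 30 seconds; at 100% it waits the full max_sleep.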

This works quite well, but note that if you plan to pull three years of data, the script will take a long time to run!
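To put a rough number on it: the loop makes one API pull per day of history, so three years is about 1095 iterations, and in the worst case where the 225-second cooldown fires on every iteration, the sleeps alone add up to about 1095 × 225 seconds, roughly 68 hours.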

I would like to thank LucyTurtle and Ashish Baid, whose scripts helped me with my work!

If you need more details, or need to pull a single day's data across several different ad accounts, see this post:

Facebook Marketing API - Getting Insights with Python - User Request Limit Reached


