Submitted by lucasd on 2010/11/27 18:41
Hello all,
 
Now that I've gotten over the initial hump of linking to an external database (via ODBC), I'm still having trouble tweaking the process (I figured I might as well start a new thread, because my initial question about ODBC was very vague).
 
Here's the issue: with the sample ODBC database ("SQLNotesIssuesTracking"), I did an experiment. I deleted all the entries in the sample database, then did a sync. But only the first record in the database (the issue titled "Outlook-style Calendar") reappeared when I ran the sync. In other words, deleting everything and then re-syncing left me with a database of only one record.
 
The same thing is happening with my connection to my Zotero SQLite file --- I can get only one record out of over a thousand.
 
Is the behaviour I describe above for the sample database expected? It occurred to me that maybe it was necessary to have existing items to sync to (rather than a blank database), but creating additional blank items did nothing --- I still get only one record after clearing the sample database.
 
Any pointers would be great -- like I said in my other post, I'm totally new to SQL, so I may be misunderstanding something obvious. But my thinking is that I should first figure out what's happening with the sample database, and then I'll be able to apply what I learn to my own database.
 
Thanks,
Lucas
 
 

Comments

I may have found the issue. Can you try to do a Repair before doing the Sync ?
 
 

lucasd

2010/11/27 21:45

In reply to by Pierre_Admin

Okay, thanks Pierre, that worked for the sample database. I'm still having the same issue with my own database, but this helps me narrow down what the problem is. I'll do some more fiddling and report back. 
 

Pierre_Admin

2010/11/27 21:49

In reply to by lucasd

If you can, post your current synch file. I'll be able to try it out here.
 

lucasd

2010/11/27 22:04

In reply to by Pierre_Admin

Okay, I'm not sure how useful it is without access to the database, but here's my current synch file (I'm using a simplified one for testing):
 
<ConnectionString>
Driver={SQLite3 ODBC Driver};
DSN=ztester;
Database=C:\Documents and Settings\Lucas\My Documents\ztester.sqlite;
</ConnectionString>

<FieldMap>
[title]    item
[itemid]    iditem_import
[datemodified]    dateupdate
[key]    b26_zotkey
</FieldMap>

<Source>
select items.itemid, itemdatavalues.value AS title, items.key, items.datemodified
from items
join itemdata
on items.itemid=itemdata.itemid
join itemdatavalues
on itemdata.valueid=itemdatavalues.valueid
where itemdata.fieldid=110
limit 10
;
</source>

<RecordIDField>iditem_import</RecordIDField>
<RecordStatus>IDimportType    new    update</RecordStatus>
<RecordDateField>dateupdate</RecordDateField>
<LastUpdate>2006-05-17 21:10:48</LastUpdate>
 
When I run a synch from my database, I get the dialogue asking if I want to proceed with importing the 1000+ records, but then nothing happens. In fact, now I'm not even getting a single import (I'm not quite sure which setting I had used before to get one item in).
 

lucasd

2010/11/27 22:11

In reply to by Pierre_Admin

I don't know if it's related, but I did some playing around with the sample database synch file, and I ended up with behaviour identical to what I'm seeing in my own database. Here's a heavily cut-down version of the sample synch file:
 
<ConnectionString>
Driver={MySQL ODBC 3.51 Driver};Server=mysql408.ixwebhosting.com;Port=3306;Database=NeoTech_mySQLMantis;User=NeoTech_MantisVW;Option=3;</ConnectionString>

<Password>ÙÞÚÌÎÏ</Password>

<FieldMap>

[id]    MantisBugID
[summary]    item
[last_updated]    LastUpdated

</FieldMap>

<Source>

select mantis_bug_table.id, mantis_bug_table.summary, mantis_bug_table.last_updated

FROM mantis_bug_table

;
</source>

<RecordIDField>MantisBugID</RecordIDField>
<RecordStatus>IDImportType    New    Update</RecordStatus>
<RecordDateField>LastUpdated</RecordDateField>
<LastUpdate>2009-05-17 21:10:48</LastUpdate>
 
When I run a synch using the above synch file, I get the dialogue asking if I want to import all the items, but then nothing happens. Is there an obvious reason for this?
 

Pierre_Admin

2010/11/27 22:50

In reply to by lucasd

Hi Lucas,
I found the issue.
 
You need this line in the FieldMap section:
IMPORT    IDImportType
 
So the correct synch file should be:
 
<ConnectionString>Driver={MySQL ODBC 3.51 Driver};Server=mysql408.ixwebhosting.com;Port=3306;Database=NeoTech_mySQLMantis; User=NeoTech_MantisVW;Option=3;</ConnectionString>
<Password>ÙÞÚÌÎÏ</Password> 
<FieldMap>
[id]    MantisBugID 
[summary]    item 
[last_updated]    LastUpdated 
IMPORT    IDImportType 
</FieldMap> 
<Source>SELECT mantis_bug_table.id, mantis_bug_table.summary, mantis_bug_table.last_updated FROM mantis_bug_table 
WHERE (((mantis_bug_table.last_updated) > '[LASTUPDATE]')); 
</source> 
<RecordIDField>MantisBugID</RecordIDField> 
<RecordStatus>IDImportType    New    Update</RecordStatus> 
<RecordDateField>LastUpdated</RecordDateField> 
<LastUpdate>2007-11-26 15:19:42</LastUpdate>

The reason is that the <RecordStatus> line refers to IDImportType, but IDImportType is not defined in the FieldMap section. I can look into the reason for this requirement, but right now, this is the way it works !

 
HTH !

lucasd

2010/11/27 23:09

In reply to by Pierre_Admin

Okay, thanks, everything's working now! Actually, I not only had to add an "IMPORT..." line to my field mapping, but I also had to add an equivalent of the "1 MantisIssue" line. I now understand that this refers to the "Yes/No" checkbox for inclusion in the relevant IQ grid. We should definitely include a note about this in the Help file.
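For anyone else hitting this, the two extra lines in my FieldMap ended up looking like this in my case (Bibliography is just the name of the grid I'm importing into):

IMPORT    IDImportType
1    Bibliography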


Thanks again for your help --- this software is amazing.



lucasd

2010/11/30 17:14

In reply to by lucasd

A couple updates:
 
(Note: I now realize it was pointless to split my ODBC posts into two threads; for reference, the original thread is here.)
 
I now have IQ communicating successfully with my Zotero database, although I'm only able to import/update about 20 records at a time. I'm guessing this is a limitation of the SQLite ODBC driver I'm using rather than a limitation of IQ.
 
For anyone interested, here is my sqlsynch.xml code (probably a bit messy, but not too bad for a SQL first-timer, I hope):
 

<ConnectionString>
Driver={SQLite3 ODBC Driver};
C:\Documents and Settings\Lucas\Application Data\Mozilla\Firefox\Profiles\rycv3g5o.default\zotero\zotero.sqlite;
</ConnectionString>

<FieldMap>
[Citation]    item
[itemid]    zotero_id
[datemodified]    dateupdate
[Type]    z12_ItemType
[Title]    z4_Title
[Author(s)]    z1a_Authors
[Editor(s)]    z1b_Editors
[Year]    z3_Year
[Book Title]    z13_BookTitle
[Journal]    z15_Journal
[Volume]    z16_Volume
[Issue]    z17_Issue
[Notes]    z20_Notes
[Abstract]    z19_Abstract
[Thesis Type]    z18_Diss_type
[Place]    z22_Place
[Publisher]    z23_Publisher
[Pages]    z21_Pages
[Tags]    z27_Tags
[Collections]    z28_Collections
[parentCollectionID]    z29_collectionparentID
[Attachments]    z30_Attachments
[Related Sources]    z31_Related_Sources
[Related Notes]    z32_Related_Notes
IMPORT    IDImportType
1    Bibliography
</FieldMap>

<Source>

select items.itemid,

"<B>"||
group_concat(distinct
coalesce(
kingdata0a.lastname, kingdata0e.lastname, kingdata1a.lastname, kingdata1e.lastname, kingdata2a.lastname, kingdata2e.lastname, kingdata3a.lastname, kingdata3e.lastname," ")
)
||

coalesce(
standalonenotes.title," ")
||
", "||
substr(
coalesce(
kingdata0a.firstname, kingdata0e.firstname, kingdata1a.firstname, kingdata1e.firstname, kingdata2a.firstname, kingdata2e.firstname, kingdata3a.firstname, kingdata3e.firstname," ")
, 1, 1)
||
"</B>"
||
". "
||
substr(coalesce(yearvalues.value,"[ ]"), 1, 4)
||
". "
||
"<I>"
||coalesce(titlevalues.value," ")
||
"</I>"
||
coalesce(standalonenotes.note," ")
as Citation,

itemtypes.typename as Type,

titlevalues.value AS Title,

group_concat(
 AuthorMainCreatorData.lastname
||
(select ", " where authormaincreatordata.firstname is not null) ||
coalesce(
AuthorMainCreatorData.firstname
,""),"; ")
as [Author(s)],

group_concat(
(select distinct EditorMainCreatorData.lastname)
||
(select ", " where editormaincreatordata.firstname is not null) ||
coalesce(
EditorMainCreatorData.firstname
,""),"; ")
 as [Editor(s)],


substr(coalesce(yearvalues.value,"[ ]"), 1, 4) as Year,
booktitlevalues.value as [Book Title],
journaldatavalues.value as Journal,
journalvolumedatavalues.value as Volume,
journalissuedatavalues.value as Issue,
group_concat(itemnotes.note) as Notes,
abstractvalues.value as Abstract,
thesisvalues.value as [Thesis Type],
placedatavalues.value as Place,
publisherdatavalues.value as Publisher,
pagesdatavalues.value as Pages,
group_concat(distinct tags.name) as Tags,
group_concat(distinct collections.collectionName) as Collections,
collections.parentcollectionid,
group_concat(distinct itemAttachments.path) as Attachments,

group_concat(distinct
coalesce(
seealsooutTitledatavalues.value," "))
 ||
group_concat(distinct
coalesce(
seealsoinTitledatavalues.value," "))
as [Related Sources],

group_concat(distinct
coalesce(
outnotes.title," "))
||
group_concat(distinct
coalesce(
innotes.title," "))
as [Related Notes],

items.datemodified

FROM

items

left join itemtypes
on items.itemtypeid=itemtypes.itemtypeid

left join itemnotes
on items.itemid=itemnotes.sourceitemid

left join itemnotes as StandaloneNotes
on items.itemid=StandaloneNotes.itemid and StandaloneNotes.sourceitemid is null

left join itemcreators
on items.itemid=itemcreators.itemid

left join creators
on itemcreators.creatorid=creators.creatorid

left join creatordata
on creators.creatordataid=creatordata.creatordataid

left join creatordata as EditorMainCreatorData
on creators.creatordataid=EditorMainCreatorData.creatordataid and itemcreators.creatortypeid=3

left join creatordata as AuthorMainCreatorData
on creators.creatordataid=AuthorMainCreatorData.creatordataid and itemcreators.creatortypeid=1

left join itemcreators as itemcreators0
on items.itemid=itemcreators0.itemid and itemcreators0.orderindex=0

left join creators as creators0
on itemcreators0.creatorid=creators0.creatorid

left join creatordata as creatordata0
on creators0.creatordataid=creatordata0.creatordataid

left join itemcreators as itemcreators1
on items.itemid=itemcreators1.itemid and itemcreators1.orderindex=1

left join creators as creators1
on itemcreators1.creatorid=creators1.creatorid

left join creatordata as creatordata1
on creators1.creatordataid=creatordata1.creatordataid


left join itemcreators as itemcreators2
on items.itemid=itemcreators2.itemid and itemcreators2.orderindex=2

left join creators as creators2
on itemcreators2.creatorid=creators2.creatorid

left join creatordata as creatordata2
on creators2.creatordataid=creatordata2.creatordataid


left join itemcreators as itemcreators3
on items.itemid=itemcreators3.itemid and itemcreators3.orderindex=3

left join creators as creators3
on itemcreators3.creatorid=creators3.creatorid

left join creatordata as creatordata3
on creators3.creatordataid=creatordata3.creatordataid

left join creatordata as Kingdata0a
on
(creators0.creatordataid=Kingdata0a.creatordataid and itemcreators0.creatortypeid=1)

left join creatordata as Kingdata0e
on
(creators0.creatordataid=Kingdata0e.creatordataid and (itemcreators1.creatortypeid!=1 OR itemcreators1.creatortypeid is null) AND (itemcreators2.creatortypeid!=1 OR itemcreators2.creatortypeid is null) AND (itemcreators3.creatortypeid!=1 OR itemcreators3.creatortypeid is null) AND itemcreators0.creatortypeid=3)

left join creatordata as Kingdata1a
on
(creators1.creatordataid=Kingdata1a.creatordataid and itemcreators1.creatortypeid=1)

left join creatordata as Kingdata1e
on
(creators1.creatordataid=Kingdata1e.creatordataid and itemcreators0.creatortypeid!=1 AND itemcreators2.creatortypeid!=1 AND itemcreators3.creatortypeid!=1 AND itemcreators1.creatortypeid=3)


left join creatordata as Kingdata2a
on
(creators2.creatordataid=Kingdata2a.creatordataid and itemcreators2.creatortypeid=1)

left join creatordata as Kingdata2e
on
(creators2.creatordataid=Kingdata2e.creatordataid and itemcreators0.creatortypeid!=1 AND itemcreators1.creatortypeid!=1 AND itemcreators3.creatortypeid!=1 AND itemcreators2.creatortypeid=3)


left join creatordata as Kingdata3a
on
(creators3.creatordataid=Kingdata3a.creatordataid and itemcreators3.creatortypeid=1)

left join creatordata as Kingdata3e
on
(creators3.creatordataid=Kingdata3e.creatordataid and itemcreators0.creatortypeid!=1 AND itemcreators1.creatortypeid!=1 AND itemcreators2.creatortypeid!=1 AND itemcreators3.creatortypeid=3)


left join itemdata as abstractdata
on items.itemid=abstractdata.itemid and abstractdata.fieldid=90

left join itemdatavalues as abstractvalues
on abstractdata.valueid=abstractvalues.valueid


left join itemdata as booktitledata
on items.itemid=booktitledata.itemid and booktitledata.fieldid=115

left join itemdatavalues as booktitlevalues
on booktitledata.valueid=booktitlevalues.valueid

left join itemdata as yeardata
on items.itemid=yeardata.itemid and yeardata.fieldid=14

left join itemdatavalues as yearvalues
on yeardata.valueid=yearvalues.valueid

left join itemdata as titledata
on items.itemid=titledata.itemid and titledata.fieldid=110

left join itemdatavalues as titlevalues
on titledata.valueid=titlevalues.valueid

left join itemdata as thesisdata
on items.itemid=thesisdata.itemid and thesisdata.fieldid=69

left join itemdatavalues as thesisvalues
on thesisdata.valueid=thesisvalues.valueid

left join itemdata as journaldata
on items.itemid=journaldata.itemid and journaldata.fieldid=12

left join itemdatavalues as journaldatavalues
on journaldata.valueid=journaldatavalues.valueid

left join itemdata as journalvolumedata
on items.itemid=journalvolumedata.itemid and journalvolumedata.fieldid=4

left join itemdatavalues as journalvolumedatavalues
on journalvolumedata.valueid=journalvolumedatavalues.valueid

left join itemdata as journalissuedata
on items.itemid=journalissuedata.itemid and journalissuedata.fieldid=5

left join itemdatavalues as journalissuedatavalues
on journalissuedata.valueid=journalissuedatavalues.valueid

left join itemdata as placedata
on items.itemid=placedata.itemid and placedata.fieldid=7

left join itemdatavalues as placedatavalues
on placedata.valueid=placedatavalues.valueid

left join itemdata as publisherdata
on items.itemid=publisherdata.itemid and publisherdata.fieldid=8

left join itemdatavalues as publisherdatavalues
on publisherdata.valueid=publisherdatavalues.valueid

left join itemdata as pagesdata
on items.itemid=pagesdata.itemid and pagesdata.fieldid=10

left join itemdatavalues as pagesdatavalues
on pagesdata.valueid=pagesdatavalues.valueid

left join itemtags
on items.itemid=itemtags.itemid

left join tags
on itemtags.tagid=tags.tagid

left join collectionItems
on items.itemid=collectionItems.itemid

left join collections
on collectionItems.collectionID=collections.collectionID

left join itemattachments
on items.itemid=itemattachments.sourceitemid

left join itemseealso as SeeAlsoOut
on items.itemid=SeeAlsoOut.itemid

left join itemseealso as SeeAlsoIn
on items.itemid=SeeAlsoIn.linkeditemid

left join itemdata as seealsooutTitledata
on seealsoout.linkeditemid=seealsooutTitledata.itemid and seealsooutTitledata.fieldid=110

left join itemdata as seealsoinTitledata
on seealsoin.itemid=seealsoinTitledata.itemid and seealsoinTitledata.fieldid=110

left join itemdatavalues as seealsooutTitledatavalues
on seealsooutTitledata.valueid=seealsooutTitledatavalues.valueid

left join itemdatavalues as seealsoinTitledatavalues
on seealsoinTitledata.valueid=seealsoinTitledatavalues.valueid

left join itemnotes as outnotes
on seealsoout.linkeditemid=outnotes.itemid

left join itemnotes as innotes
on seealsoin.itemid=innotes.itemid

where items.itemtypeid!=14

group by items.itemid

;

</source>

<RecordIDField>zotero_id</RecordIDField>
<RecordStatus>IDimportType    ADD    update</RecordStatus>
<RecordDateField>dateupdate</RecordDateField>
<LastUpdate>2010-11-28 07:19:15</LastUpdate>

 


lucasd

2010/11/30 18:53

In reply to by lucasd

Okay, I've now solved the problem of only getting 10 items at a time --- now I can update the entire database at once. The problem turned out to be my Zotero notes. The way I had it set up, sometimes a dozen notes of 100 words each would all get jammed into a single "Notes" cell in the row of the corresponding reference item. So, to get synching working properly, I just disabled the synching of notes. I'll keep playing around with it and see if there's still a good way to get the notes data.
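In case it's useful to anyone, this is roughly what I took out of the synch file posted above to disable the notes. From the FieldMap:

[Notes]    z20_Notes

and from the Source query, the Notes column and its join:

group_concat(itemnotes.note) as Notes,

left join itemnotes
on items.itemid=itemnotes.sourceitemid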
 
 

lucasd

2010/11/30 19:00

In reply to by lucasd

Just had a thought: right now item HTML is read-only, but wouldn't it make sense to make it possible to import data into the HTML pane? This would be an especially good solution in my case, because my Zotero notes are already in HTML (that's how Zotero is set up), so if the notes data synched/imported to the HTML pane, I wouldn't have to look at lots of HTML tags when I read my notes. (Alternatively, perhaps there could be a preference to enable HTML in other columns?)
 

Armando

2010/11/30 19:18

In reply to by lucasd

Thanks a lot for sharing that Lucas !
It's very nice, and should be very useful.
I'll certainly try it when I have more time.
 
 

I've now discovered that when synching with my Zotero database, text in each cell is getting cut off after 255 characters.
 
Is this the expected behavior?
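(For what it's worth, a quick check in any SQLite tool should show whether the full text really is in the Zotero file, i.e. whether the cutoff is happening on the ODBC/import side rather than in the source data. Something like:

select length(note) from itemnotes order by length(note) desc limit 5;

If that reports notes well over 255 characters, the data itself isn't the problem.)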
 

lucasd

2010/12/01 11:10

In reply to by lucasd

One further detail: I tried tweaking the ODBC synch so that my Zotero notes, some of which are long, become separate items in IQ, with note text getting copied to the main "Item" field. But even in this case, the text of each item gets cut off around 255 characters.
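In case it clarifies what I was trying: for that experiment the Source was a much simpler query, roughly along these lines, with itemnotes.note mapped to the main "Item" field in the FieldMap:

select itemnotes.itemid, itemnotes.note, items.datemodified
from itemnotes
join items
on itemnotes.itemid=items.itemid
;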
 
Any suggestions?
 
Thanks.
 

Pierre_Admin

2010/12/01 12:19

In reply to by lucasd

Hi Lucas,
 
First, quite the SQL command you came up with ! I'm surprised that SQLite can handle one with so many joins !
 
I could easily add the possibility to save in the HTML, let me look into this.
 
As for the truncation, it's probably the driver, as IQ does not have such limits
 

lucasd

2010/12/01 14:47

In reply to by Pierre_Admin

 
[quote]First, quite the SQL command you came up with ! I'm surprised that SQLite can handle one with so many joins ![/quote]
 
Thanks :). I suppose I got a little carried away. Once I got to 64 joins, SQLite told me it wouldn't allow any more.
 
[quote]I could easily add the possibility to save in the HTML, let me look into this.[/quote]
 
That would be fantastic.
 
[quote]As for the truncation, it's probably the driver, as IQ does not have such limits[/quote]
 
OK, good to know. I suppose that means the same limit would apply even if I were to synch to the HTML pane. But it's not a big deal --- I'll figure out a way to do a one-time import of my Zotero notes, and then in the future I'll just take notes directly in IQ. (Straightforward importing via CSV gets garbled, because the notes are long and include lots of commas and such, but I can probably figure out something using the Zotero export filters.)